Prep 0.20.0 (#77)

* fix issue where recent save cover images would sometimes not load
* paraphrase prompt tweaks
* action_to_narration regenerate compatibility fixes
* sim suite add answer question instruction
* more sim suite tweaks
* refactor agent details display in agent bar
* visual agent progress (a1111 support)
* visual gen prompt tweaks
* openai compat client pass max_tokens
* world state sequential reinforcement max tokens tightened
* improve item names
* attempt to remove "changed from.." notes when altering an existing character sheet
* prompt improvements for single character portraits
* visual agent progress
* fix issue where character.update wouldn't update long-term memory
* remove experimental flag for now
* add better instructions for updating existing character sheet
* background processing for agents, visual and tts
* fix selected voice not saving between restarts for elevenlabs
* lessen timeout
* clean up agent status logic
* conditional agent configs
* comfyui support
* visualization queue
* refactor visual styles, comfyui progress
* regen images, auto cover image assign, websocket handler plugin abstraction, agent websocket handler
* automatic1111 fixes, agent status and ready checks
* tweaks to character portrait prompt
* system prompt for visualize
* textgenwebui use temp smoothing on yi models
* comment out api key for now
* fixes issues with openai compat client for retaining api key and auto fixing urls
* update_reinforcment tweaks
* agent status emit from one place
* emit agent status as asyncio task
* remove debug output
* tts add openai support
* openai img gen support
* fix issue with comfyui checkbox list not loading
* tts model selection for openai
* narrate_query include character sheet if character is referenced in query
* improve visual character portrait generation prompt
* client implementation extra field support and runpod vllm client example
* relock
* fix issue where changing context length would cause next generation to error
* visual agent tweaks and auto gen character cover image in sim suite
* fix issue with readiness lock when there weren't any clients defined
* load scene readiness fixes
* linting
* docs
* notes for the runpod vllm example
README.md (35 changed lines)

@@ -29,7 +29,8 @@ This means you need to either have:
- editor: improves AI responses (very hit and miss at the moment)
- world state: generates world snapshot and handles passage of time (objects and characters)
- creator: character / scenario creator
- tts: text to speech via elevenlabs, coqui studio, coqui local
- tts: text to speech via elevenlabs, OpenAI or local tts
- visual: stable-diffusion client for in place visual generation via AUTOMATIC1111, ComfyUI or OpenAI
- multi-client support (agents can be connected to separate APIs)
- long term memory
- chromadb integration
@@ -54,7 +55,6 @@ Kinda making it up as i go along, but i want to lean more into gameplay through

In no particular order:

- Extension support
- modular agents and clients
- Improved world state
@@ -68,7 +68,26 @@ In no particular order:
- objectives
- quests
- win / lose conditions
- stable-diffusion client for in place visual generation

# Instructions

Please read the documents in the `docs` folder for more advanced configuration and usage.

- [Quickstart](#quickstart)
- [Installation](#installation)
- [Connecting to an LLM](#connecting-to-an-llm)
- [Text-generation-webui](#text-generation-webui)
- [Recommended Models](#recommended-models)
- [OpenAI](#openai)
- [Ready to go](#ready-to-go)
- [Load the introductory scenario "Infinity Quest"](#load-the-introductory-scenario-infinity-quest)
- [Loading character cards](#loading-character-cards)
- [Text-to-Speech (TTS)](docs/tts.md)
- [Visual Generation](docs/visual.md)
- [ChromaDB (long term memory) configuration](docs/chromadb.md)
- [Runpod Integration](docs/runpod.md)
- [Prompt template overrides](docs/templates.md)

# Quickstart

@@ -174,13 +193,3 @@ Expand the "Load" menu in the top left corner and either click on "Upload a char

Once a character is uploaded, talemate may take a moment, because it needs to convert it to the talemate format and will also run additional LLM prompts to generate character attributes and world state.

Make sure you save the scene after the character is loaded, as it can then be loaded as a normal talemate scenario in the future.

## Further documentation

Please read the documents in the `docs` folder for more advanced configuration and usage.

- [Prompt template overrides](docs/templates.md)
- [Text-to-Speech (TTS)](docs/tts.md)
- [ChromaDB (long term memory)](docs/chromadb.md)
- [Runpod Integration](docs/runpod.md)
- Creative mode
docs/dev/client/example/runpod_vllm/__init__.py (new file, 130 lines)

@@ -0,0 +1,130 @@
"""
An attempt to write a client against the runpod serverless vllm worker.

This is close to functional, but since runpod serverless gpu availability is currently terrible, I have
been unable to properly test it.

Putting it here for now since I think it makes a decent example of how to write a client against a new service.
"""

import asyncio

import aiohttp
import pydantic
import runpod
import structlog

from talemate.client.base import ClientBase, ExtraField
from talemate.client.registry import register
from talemate.config import Client as BaseClientConfig
from talemate.emit import emit

log = structlog.get_logger("talemate.client.runpod_vllm")


class Defaults(pydantic.BaseModel):
    max_token_length: int = 4096
    model: str = ""
    runpod_id: str = ""


class ClientConfig(BaseClientConfig):
    runpod_id: str = ""


@register()
class RunPodVLLMClient(ClientBase):
    client_type = "runpod_vllm"
    conversation_retries = 5
    config_cls = ClientConfig

    class Meta(ClientBase.Meta):
        title: str = "Runpod VLLM"
        name_prefix: str = "Runpod VLLM"
        enable_api_auth: bool = True
        manual_model: bool = True
        defaults: Defaults = Defaults()
        extra_fields: dict[str, ExtraField] = {
            "runpod_id": ExtraField(
                name="runpod_id",
                type="text",
                label="Runpod ID",
                required=True,
                description="The Runpod ID to connect to.",
            )
        }

    def __init__(self, model=None, runpod_id=None, **kwargs):
        self.model_name = model
        self.runpod_id = runpod_id
        super().__init__(**kwargs)

    @property
    def experimental(self):
        return False

    def set_client(self, **kwargs):
        log.debug("set_client", kwargs=kwargs, runpod_id=self.runpod_id)
        self.runpod_id = kwargs.get("runpod_id", self.runpod_id)

    def tune_prompt_parameters(self, parameters: dict, kind: str):
        super().tune_prompt_parameters(parameters, kind)

        keys = list(parameters.keys())
        valid_keys = ["temperature", "top_p", "max_tokens"]

        for key in keys:
            if key not in valid_keys:
                del parameters[key]

    async def get_model_name(self):
        return self.model_name

    async def generate(self, prompt: str, parameters: dict, kind: str):
        """
        Generates text from the given prompt and parameters.
        """
        prompt = prompt.strip()

        self.log.debug("generate", prompt=prompt[:128] + " ...", parameters=parameters)

        try:
            async with aiohttp.ClientSession() as session:
                endpoint = runpod.AsyncioEndpoint(self.runpod_id, session)

                run_request = await endpoint.run(
                    {
                        "input": {
                            "prompt": prompt,
                        }
                        # "parameters": parameters
                    }
                )

                while (await run_request.status()) not in [
                    "COMPLETED",
                    "FAILED",
                    "CANCELLED",
                ]:
                    status = await run_request.status()
                    log.debug("generate", status=status)
                    await asyncio.sleep(0.1)

                status = await run_request.status()
                log.debug("generate", status=status)

                response = await run_request.output()
                log.debug("generate", response=response)

                return response["choices"][0]["tokens"][0]

        except Exception as e:
            self.log.error("generate error", e=e)
            emit(
                "status", message="Error during generation (check logs)", status="error"
            )
            return ""

    def reconfigure(self, **kwargs):
        if kwargs.get("model"):
            self.model_name = kwargs["model"]
        if "runpod_id" in kwargs:
            self.runpod_id = kwargs["runpod_id"]
        log.warning("reconfigure", kwargs=kwargs)
        self.set_client(**kwargs)
New binary files:

- docs/img/0.20.0/comfyui-base-workflow.png (128 KiB)
- docs/img/0.20.0/visual-config-a1111.png (32 KiB)
- docs/img/0.20.0/visual-config-comfyui.png (34 KiB)
- docs/img/0.20.0/visual-config-openai.png (30 KiB)
- docs/img/0.20.0/visual-queue.png (933 KiB)
- docs/img/0.20.0/visualize-scene-tools.png (13 KiB)
- docs/img/0.20.0/visualizer-busy.png (3.5 KiB)
- docs/img/0.20.0/visualizer-ready.png (2.9 KiB)
- docs/img/0.20.0/visualze-new-images.png (1.8 KiB)
docs/visual.md (new file, 117 lines)

@@ -0,0 +1,117 @@
# Visual Agent

The visual agent currently allows for some bare bones visual generation using various stable-diffusion APIs. This is early development and experimental.

It's important to note that the visualization agent actually specifies two clients. One is the backend for the visual generation, and the other is the text generation client to use for prompt generation.

The client for prompt generation can be assigned to the agent as you would for any other agent. The client for visual generation is assigned in the Visualizer config.
## Index

- [OpenAI](#openai)
- [AUTOMATIC1111](#automatic1111)
- [ComfyUI](#comfyui)
- [How to use](#how-to-use)

## OpenAI

This is the most straightforward backend to use, as it runs on the OpenAI API. You will need to have an API key and set it in the application config.

![Openai api key](docs/img/0.17.0/openai-api-key.png)

Then open the Visualizer config by clicking the agent's name in the agent list and choose `OpenAI` as the backend.

![Visual config openai](docs/img/0.20.0/visual-config-openai.png)

Note: `Client` here refers to the text-generation client to use for prompt generation, while `Backend` refers to the visual generation backend. You are **NOT** required to use the OpenAI client for prompt generation even if you are using the OpenAI backend for image generation.
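For reference, the OpenAI image generation endpoint takes a request along these lines. This is a minimal sketch, not Talemate's actual code; the model name and size defaults here are assumptions for illustration:

```python
import json
import urllib.request

OPENAI_IMAGES_URL = "https://api.openai.com/v1/images/generations"


def build_image_request(prompt: str, model: str = "dall-e-3", size: str = "1024x1024") -> dict:
    """Build the JSON body for the image generation endpoint."""
    return {"model": model, "prompt": prompt, "size": size, "n": 1}


def generate_image_url(prompt: str, api_key: str) -> str:
    """POST the request and return the URL of the generated image."""
    req = urllib.request.Request(
        OPENAI_IMAGES_URL,
        data=json.dumps(build_image_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # the API key from the application config is sent as a bearer token
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"][0]["url"]
```

The backend wraps this kind of call; Talemate only needs your API key in the config to authenticate.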
## AUTOMATIC1111

This requires you to set up a local instance of the AUTOMATIC1111 API. Follow the instructions from their [GitHub](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to get it running.

Once you have it running, you will want to adjust the `webui-user.bat` in the AUTOMATIC1111 directory to include the following command arguments:

```bat
set COMMANDLINE_ARGS=--api --listen --port 7861
```

Then run the `webui-user.bat` to start the API.

Once your AUTOMATIC1111 API is running (check with your browser) you can set the Visualizer config to use the `AUTOMATIC1111` backend.

![Visual config a1111](docs/img/0.20.0/visual-config-a1111.png)

### Extra Configuration

- `api url`: the url of the API, usually `http://localhost:7861`
- `steps`: render steps
- `model type`: sdxl or sd1.5 - this will dictate the resolution of the image generation and actually matters for the quality, so make sure this is set to the correct model type for the model you are using.
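To illustrate what the backend does with these settings, a minimal request against the AUTOMATIC1111 web API's `/sdapi/v1/txt2img` endpoint could look like the sketch below. This is illustrative only, not Talemate's actual code; the exact resolutions derived from the model type are assumptions:

```python
import base64
import json
import urllib.request

API_URL = "http://localhost:7861"  # matches the --port 7861 setup above


def build_txt2img_payload(prompt: str, model_type: str = "sdxl", steps: int = 25) -> dict:
    """Build a request body for the txt2img endpoint.

    Resolution is derived from the model type, mirroring the
    `model type` option described above (sizes here are illustrative).
    """
    size = 1024 if model_type == "sdxl" else 512
    return {"prompt": prompt, "steps": steps, "width": size, "height": size}


def txt2img(prompt: str, **kwargs) -> bytes:
    """POST the payload and return the first generated image as PNG bytes."""
    data = json.dumps(build_txt2img_payload(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        f"{API_URL}/sdapi/v1/txt2img",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # A1111 returns generated images as base64 encoded strings
    return base64.b64decode(result["images"][0])
```

This is also why the `model type` setting matters: sending an sd1.5 model an sdxl-sized canvas degrades output quality.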
## ComfyUI

This requires you to set up a local instance of the ComfyUI API. Follow the instructions from their [GitHub](https://github.com/comfyanonymous/ComfyUI) to get it running.

Once you're set up, copy their `start.bat` file to a new `start-listen.bat` file and change the contents to:

```bat
call venv\Scripts\activate
call python main.py --port 8188 --listen 0.0.0.0
```

Then run the `start-listen.bat` to start the API.

Once your ComfyUI API is running (check with your browser) you can set the Visualizer config to use the `ComfyUI` backend.

![Visual config comfyui](docs/img/0.20.0/visual-config-comfyui.png)

### Extra Configuration

- `api url`: the url of the API, usually `http://localhost:8188`
- `workflow`: the workflow file to use. This is a ComfyUI API workflow file that needs to exist in `./templates/comfyui-workflows` inside the talemate directory. Talemate provides two very barebones workflows with `default-sdxl.json` and `default-sd15.json`. You can create your own workflows and place them in this directory to use them. :warning: The workflow file must be generated using the API workflow export, not the UI export. Please refer to their documentation for more information.
- `checkpoint`: the model to use - this will load a list of all available models in your ComfyUI instance. Select which one you want to use for the image generation.

### Custom Workflows

When creating custom workflows for ideal compatibility with Talemate, ensure the following:

- A `CheckpointLoaderSimple` node named `Talemate Load Checkpoint`
- An `EmptyLatentImage` node named `Talemate Resolution`
- A `ClipTextEncode` node named `Talemate Positive Prompt`
- A `ClipTextEncode` node named `Talemate Negative Prompt`
- A `SaveImage` node at the end of the workflow.

![comfyui workflow](docs/img/0.20.0/comfyui-base-workflow.png)
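To see why the node titles matter, here is a rough sketch of how a client can fill a named node in an API-format workflow export. API-format exports store each node under an id with its `inputs` and a `_meta.title`; the lookup below is illustrative and may differ from Talemate's actual implementation:

```python
def find_node(workflow: dict, title: str) -> dict:
    """Return the first node in an API-format workflow with the given title."""
    for node in workflow.values():
        if node.get("_meta", {}).get("title") == title:
            return node
    raise KeyError(f"no node titled {title!r} in workflow")


def apply_prompts(workflow: dict, positive: str, negative: str) -> dict:
    """Write prompt texts into the Talemate-named prompt nodes."""
    find_node(workflow, "Talemate Positive Prompt")["inputs"]["text"] = positive
    find_node(workflow, "Talemate Negative Prompt")["inputs"]["text"] = negative
    return workflow


# a tiny stand-in for a real API-format export
workflow = {
    "6": {"class_type": "ClipTextEncode", "inputs": {"text": ""}, "_meta": {"title": "Talemate Positive Prompt"}},
    "7": {"class_type": "ClipTextEncode", "inputs": {"text": ""}, "_meta": {"title": "Talemate Negative Prompt"}},
}
apply_prompts(workflow, "a castle at dusk", "blurry, low quality")
```

If a workflow is missing one of the expected titles, this kind of lookup has nothing to patch, which is why the naming convention above is required.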
## How to use

Once you're done setting up, the visualizer agent should have a green dot next to it and display both the selected image generation backend and the selected prompt generation client.

![visualizer ready](docs/img/0.20.0/visualizer-ready.png)

Your hotbar should then also enable the visualization menu for you to use (once you have a scene loaded).

![visualize scene tools](docs/img/0.20.0/visualize-scene-tools.png)

Right now you can generate a portrait for any NPC in the scene or a background image for the scene itself.

Image generation by default will happen in the background, allowing you to continue using Talemate while the image is being generated.

You can tell if an image is being generated by the blueish spinner next to the visualization agent.

![visualizer busy](docs/img/0.20.0/visualizer-busy.png)

Once the image is generated, it will be available for you to view via the visual queue button on top of the screen.

![new images](docs/img/0.20.0/visualze-new-images.png)

Click it to open the visual queue and view the generated images.

![visual queue](docs/img/0.20.0/visual-queue.png)

### Character Portrait

For character portraits you can choose whether or not to replace the main portrait for the character (the one being displayed in the left sidebar when a talemate scene is active).

### Background Image

Right now there is nothing to do with the background image, other than to view it in the visual queue. More functionality will be added in the future.
poetry.lock (generated, 1619 changed lines)

@@ -4,7 +4,7 @@ build-backend = "poetry.masonry.api"

[tool.poetry]
name = "talemate"
version = "0.19.0"
version = "0.20.0"
description = "AI-backed roleplay and narrative tools"
authors = ["FinalWombat"]
license = "GNU Affero General Public License v3.0"
@@ -18,6 +18,7 @@ You must at least call one of the following functions:
- set_player_persona
- set_player_name
- end_simulation
- answer_question

Set the player persona at the beginning of a new simulation or if the player requests a change.

@@ -52,7 +53,7 @@ Request: Computer, I want to experience a rollercoaster ride with a friend
change_environment("theme park, riding a rollercoaster")
set_player_persona("young female experiencing rollercoaster ride")
set_player_name("Susanne")
add_ai_character("a female friend of player")
add_ai_character("a female friend of player named Sarah")
```

Request: Computer, I want to experience the international space station

@@ -60,7 +61,7 @@ Request: Computer, I want to experience the international space station
change_environment("international space station")
set_player_persona("astronaut experiencing first trip to ISS")
set_player_name("George")
add_ai_character("astronaut")
add_ai_character("astronaut named Henry")
```

Request: Computer, remove the goblin and add an elven woman instead

@@ -77,19 +78,19 @@ change_ai_character("make skiing instructor older")
Request: Computer, change my grandma to my grandpa
```simulation-stack
remove_ai_character("grandma")
add_ai_character("grandpa")
add_ai_character("grandpa named Steven")
```

Request: Computer, remove the skiing instructor and add my friend instead.
```simulation-stack
remove_ai_character("skiing instructor")
add_ai_character("player's friend")
add_ai_character("player's friend named Tara")
```

Request: Computer, replace the skiing instructor with my friend.
```simulation-stack
remove_ai_character("skiing instructor")
add_ai_character("player's friend")
add_ai_character("player's friend named Lisa")
```

Request: Computer, I want to end the simulation

@@ -102,6 +103,11 @@ Request: Computer, shut down the simulation
end_simulation("simulation ended")
```

Request: Computer, what do you know about the game of thrones?
```simulation-stack
answer_question("what do you know about the game of thrones?")
```

<|CLOSE_SECTION|>
<|SECTION:TASK|>
Respond with the simulation stack for the following request:
@@ -26,6 +26,12 @@
{# change environment #}
{% set _ = processed.append(call) %}

{% elif call.strip().startswith("answer_question") %}
{# answer a query #}

{% set _ = agent_action("narrator", "action_to_narration", action_name="progress_story", narrative_direction="The computer calls the following function:\n"+call+"\nand answers the player's question.", emit_message=True) %}

{% elif call.strip().startswith("set_player_persona") %}
{# transform player #}
{% set _ = emit_status("busy", "Simulation suite altering user persona.", as_scene_message=True) %}

@@ -60,9 +66,10 @@
{% set _ = emit_status("busy", "Simulation suite adding character: "+character_name, as_scene_message=True) %}
{% set _ = debug("HOLODECK add npc", name=character_name)%}
{% set npc = agent_action("director", "persist_character", name=character_name, content=player_message.raw )%}
{% set _ = agent_action("world_state", "manager", action_name="add_detail_reinforcement", character_name=npc.name, question="Goal", instructions="Generate a goal for the character, based on the user's chosen simulation", interval=25, run_immediately=True) %}
{% set _ = agent_action("world_state", "manager", action_name="add_detail_reinforcement", character_name=npc.name, question="Goal", instructions="Generate a goal for "+npc.name+", based on the user's chosen simulation", interval=25, run_immediately=True) %}
{% set _ = debug("HOLODECK added npc", npc=npc) %}
{% set _ = processed.append(call) %}
{% set _ = agent_action("visual", "generate_character_portrait", character_name=npc.name) %}
{% elif call.strip().startswith("remove_ai_character") %}
{# remove npc #}

@@ -80,12 +87,14 @@
{# change existing npc #}

{% set _ = emit_status("busy", "Simulation suite altering character.", as_scene_message=True) %}
{% set character_name = agent_action("creator", "determine_character_name", character_name=inject+" - what is the name of the character receiving the changes?", allowed_names=scene.npc_character_names) %}
{% set character_name = agent_action("creator", "determine_character_name", character_name=inject+" - what is the name of the character receiving the changes (before the change)?", allowed_names=scene.npc_character_names) %}

{% set character_name_after = agent_action("creator", "determine_character_name", character_name=inject+" - what is the name of the character receiving the changes (after the changes)?") %}

{% set npc = scene.get_character(character_name) %}

{% if npc %}
{% set _ = emit_status("busy", "Changing "+character_name, as_scene_message=True) %}
{% set _ = emit_status("busy", "Changing "+character_name+" -> "+character_name_after, as_scene_message=True) %}
{% set _ = debug("HOLODECK transform npc", npc=npc) %}
{% set character_attributes = agent_action("world_state", "extract_character_sheet", name=npc.name, alteration_instructions=player_message.raw)%}
{% set _ = npc.update(base_attributes=character_attributes) %}

@@ -93,15 +102,18 @@
{% set _ = npc.update(description=character_description) %}
{% set _ = debug("HOLODECK transform npc", attributes=character_attributes, description=character_description) %}
{% set _ = processed.append(call) %}
{% if character_name_after != character_name %}
{% set _ = npc.rename(character_name_after) %}
{% endif %}
{% endif %}
{% elif call.strip().startswith("end_simulation") %}
{# end simulation #}
{% set explicit_command = query_text_eval("has the player explicitly asked to end the simulation?", player_message.raw) %}
{% if explicit_command %}
{% set _ = emit_status("busy", "Simulation suite ending current simulation.", as_scene_message=True) %}
{% set _ = agent_action("narrator", "action_to_narration", action_name="progress_story", narrative_direction="The computer ends the simulation, dissolving the environment and all artificial characters, erasing all memory of it and finally returning the player to the inactive simulation suite.", emit_message=True) %}
{% set _ = agent_action("narrator", "action_to_narration", action_name="progress_story", narrative_direction="The computer ends the simulation, dissolving the environment and all artificial characters, erasing all memory of it and finally returning the player to the inactive simulation suite. List of artificial characters: "+(",".join(scene.npc_character_names))+". The player is also transformed back to their normal persona.", emit_message=True) %}
{% set _ = scene.sync_restore() %}
{% set update_world_state = True %}
{% set _ = agent_action("world_state", "update_world_state", force=True) %}
{% set simulation_reset = True %}
{% endif %}
{% elif "(" in call.strip() %}

@@ -122,7 +134,7 @@
{% set _ = emit_status("busy", "Simulation suite powering up.", as_scene_message=True) %}
{% set _ = game_state.set_var("instr.simulation_started", "yes", commit=False) %}
{% set _ = agent_action("narrator", "action_to_narration", action_name="progress_story", narrative_direction="Narrate the computer asking the user to state the nature of their desired simulation.", emit_message=False) %}
{% set _ = agent_action("narrator", "action_to_narration", action_name="paraphrase", narration="Please state your commands by addressing the computer by stating \"Computer,\" followed by an instruction.") %}
{% set _ = agent_action("narrator", "action_to_narration", action_name="passthrough", narration="Please state your commands by addressing the computer by stating \"Computer,\" followed by an instruction.") %}

{# pin to make sure characters don't try to interact with the simulation #}
{% set _ = agent_action("world_state", "manager", action_name="save_world_entry", entry_id="sim.quarantined", text="Characters in the simulation ARE NOT AWARE OF THE COMPUTER.", meta=make_dict(), pin=True) %}
@@ -2,4 +2,4 @@ from .agents import Agent
from .client import TextGeneratorWebuiClient
from .tale_mate import *

VERSION = "0.19.0"
VERSION = "0.20.0"
@@ -8,4 +8,5 @@ from .narrator import NarratorAgent
from .registry import AGENT_CLASSES, get_agent_class, register
from .summarize import SummarizeAgent
from .tts import TTSAgent
from .visual import VisualAgent
from .world_state import WorldStateAgent
@@ -20,6 +20,11 @@ from talemate.events import GameLoopStartEvent

__all__ = [
    "Agent",
    "AgentAction",
    "AgentActionConditional",
    "AgentActionConfig",
    "AgentDetail",
    "AgentEmission",
    "set_processing",
]

@@ -43,11 +48,24 @@ class AgentActionConfig(pydantic.BaseModel):
        arbitrary_types_allowed = True


class AgentActionConditional(pydantic.BaseModel):
    attribute: str
    value: Union[int, float, str, bool, None] = None


class AgentAction(pydantic.BaseModel):
    enabled: bool = True
    label: str
    description: str = ""
    config: Union[dict[str, AgentActionConfig], None] = None
    condition: Union[AgentActionConditional, None] = None


class AgentDetail(pydantic.BaseModel):
    value: Union[str, None] = None
    description: Union[str, None] = None
    icon: Union[str, None] = None
    color: str = "grey"


def set_processing(fn):

@@ -86,6 +104,9 @@ class Agent(ABC):
    set_processing = set_processing
    requires_llm_client = True
    auto_break_repetition = False
    websocket_handler = None
    essential = True
    ready_check_error = None

    @property
    def agent_details(self):

@@ -110,13 +131,20 @@
    @property
    def status(self):
        if self.ready:
            if not self.enabled:
                return "disabled"
            return "idle" if getattr(self, "processing", 0) == 0 else "busy"
        else:

        if not self.ready:
            return "uninitialized"

        if getattr(self, "processing", 0) > 0:
            return "busy"

        if getattr(self, "processing_bg", 0) > 0:
            return "busy_bg"

        return "idle"

    @property
    def enabled(self):
        # by default, agents are enabled, an agent class that

@@ -160,7 +188,41 @@

        return config_options

    def apply_config(self, *args, **kwargs):

    @property
    def meta(self):
        return {
            "essential": self.essential,
        }

    async def _handle_ready_check(self, fut: asyncio.Future):
        callback_failure = getattr(self, "on_ready_check_failure", None)
        if fut.cancelled():
            if callback_failure:
                await callback_failure()
            return

        if fut.exception():
            exc = fut.exception()
            self.ready_check_error = exc
            log.error("agent ready check error", agent=self.agent_type, exc=exc)
            if callback_failure:
                await callback_failure(exc)
            return

        callback = getattr(self, "on_ready_check_success", None)
        if callback:
            await callback()

    async def ready_check(self, task: asyncio.Task = None):
        self.ready_check_error = None
        if task:
            task.add_done_callback(
                lambda fut: asyncio.create_task(self._handle_ready_check(fut))
            )
            return
        return True

    async def apply_config(self, *args, **kwargs):
        if self.has_toggle and "enabled" in kwargs:
            self.is_enabled = kwargs.get("enabled", False)

@@ -228,27 +290,55 @@
        if getattr(self, "processing", None) is None:
            self.processing = 0

        if not processing:
        if processing is False:
            self.processing -= 1
            self.processing = max(0, self.processing)
        else:
        elif processing is True:
            self.processing += 1

        status = "busy" if self.processing > 0 else "idle"
        if not self.enabled:
            status = "disabled"

        emit(
            "agent_status",
            message=self.verbose_name or "",
            id=self.agent_type,
            status=status,
            status=self.status,
            details=self.agent_details,
            meta=self.meta,
            data=self.config_options(agent=self),
        )

        await asyncio.sleep(0.01)

    async def _handle_background_processing(self, fut: asyncio.Future):
        try:
            if fut.cancelled():
                return

            if fut.exception():
                log.error(
                    "background processing error",
                    agent=self.agent_type,
                    exc=fut.exception(),
                )
                await self.emit_status()
                return

            log.info("background processing done", agent=self.agent_type)
        finally:
            self.processing_bg -= 1
            await self.emit_status()

    async def set_background_processing(self, task: asyncio.Task):
        log.info("set_background_processing", agent=self.agent_type)
        if not hasattr(self, "processing_bg"):
            self.processing_bg = 0

        self.processing_bg += 1

        await self.emit_status()
        task.add_done_callback(
            lambda fut: asyncio.create_task(self._handle_background_processing(fut))
        )

    def connect(self, scene):
        self.scene = scene
        talemate.emit.async_signals.get("game_loop_start").connect(
@@ -30,7 +30,7 @@ if not chromadb:
    log.info("ChromaDB not found, disabling Chroma agent")


from .base import Agent
from .base import Agent, AgentDetail


class MemoryDocument(str):

@@ -368,8 +368,30 @@ class ChromaDBMemoryAgent(MemoryAgent):

    @property
    def agent_details(self):

        details = {
            "backend": AgentDetail(
                icon="mdi-server-outline",
                value="ChromaDB",
                description="The backend to use for long-term memory",
            ).model_dump(),
            "embeddings": AgentDetail(
                icon="mdi-cube-unfolded",
                value=self.embeddings,
                description="The embeddings model.",
            ).model_dump(),
        }

        if self.embeddings == "openai" and not self.openai_api_key:
            return "No OpenAI API key set"
            # return "No OpenAI API key set"
            details["error"] = {
                "icon": "mdi-alert",
                "value": "No OpenAI API key set",
                "description": "You must provide an OpenAI API key to use OpenAI embeddings",
                "color": "error",
            }

        return details

        return f"ChromaDB: {self.embeddings}"
@@ -548,21 +548,69 @@ class NarratorAgent(Agent):

        return response

    async def passthrough(self, narration: str) -> str:
        """
        Pass through narration message as is
        """
        narration = narration.replace("*", "")
        narration = f"*{narration}*"
        narration = util.ensure_dialog_format(narration)
        return narration

    def action_to_source(
        self,
        action_name: str,
        parameters: dict,
    ) -> str:
        """
        Generate a source string for a given action and parameters

        The source string is used to identify the source of a NarratorMessage
        and will also help regenerate the action and parameters from the source string
        later on
        """

        args = []

        if action_name == "paraphrase":
            args.append(parameters.get("narration"))
        elif action_name == "narrate_character_entry":
            args.append(parameters.get("character").name)
            # args.append(parameters.get("direction"))
        elif action_name == "narrate_character_exit":
            args.append(parameters.get("character").name)
            # args.append(parameters.get("direction"))
        elif action_name == "narrate_character":
            args.append(parameters.get("character").name)
        elif action_name == "narrate_query":
            args.append(parameters.get("query"))
        elif action_name == "narrate_time_passage":
            args.append(parameters.get("duration"))
            args.append(parameters.get("time_passed"))
            args.append(parameters.get("narrative"))
        elif action_name == "progress_story":
            args.append(parameters.get("narrative_direction"))
        elif action_name == "narrate_after_dialogue":
            args.append(parameters.get("character"))

        arg_str = ";".join(args) if args else ""

        return f"{action_name}:{arg_str}".rstrip(":")

    async def action_to_narration(
        self,
        action_name: str,
        emit_message: bool = False,
        *args,
        **kwargs,
    ):
        # calls self[action_name] and returns the result as a NarratorMessage
        # that is pushed to the history

        fn = getattr(self, action_name)
        narration = await fn(*args, **kwargs)
        narrator_message = NarratorMessage(
            narration, source=f"{action_name}:{args[0] if args else ''}".rstrip(":")
        )
        narration = await fn(**kwargs)
        source = self.action_to_source(action_name, kwargs)

        narrator_message = NarratorMessage(narration, source=source)
        self.scene.push_history(narrator_message)

        if emit_message:

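`action_to_source` packs the action's arguments into a `name:arg1;arg2` string so the message can later be regenerated from its source. The encode side below mirrors the diff; the `parse_source` inverse is a hypothetical helper, not part of the PR:

```python
def action_to_source(action_name: str, args: list) -> str:
    # mirrors the diff: join args with ";" and strip the trailing ":"
    # when there are no arguments
    arg_str = ";".join(args) if args else ""
    return f"{action_name}:{arg_str}".rstrip(":")


def parse_source(source: str):
    # hypothetical inverse, used when regenerating an action from a message
    name, _, arg_str = source.partition(":")
    return name, arg_str.split(";") if arg_str else []


src = action_to_source(
    "narrate_time_passage", ["2 hours", "later", "the storm passes"]
)
print(src)  # narrate_time_passage:2 hours;later;the storm passes
print(parse_source("passthrough"))  # ('passthrough', [])
```

One consequence of the `;` separator: arguments themselves must not contain `;`, or the round trip splits them apart — which is why the diff keeps the per-action argument lists short and structured.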
@@ -15,6 +15,7 @@ import nltk
import pydantic
import structlog
from nltk.tokenize import sent_tokenize
from openai import AsyncOpenAI

import talemate.config as config
import talemate.emit.async_signals

@@ -24,7 +25,14 @@ from talemate.emit.signals import handlers
from talemate.events import GameLoopNewMessageEvent
from talemate.scene_message import CharacterMessage, NarratorMessage

from .base import Agent, AgentAction, AgentActionConfig, set_processing
from .base import (
    Agent,
    AgentAction,
    AgentActionConditional,
    AgentActionConfig,
    AgentDetail,
    set_processing,
)
from .registry import register

try:

@@ -116,6 +124,7 @@ class TTSAgent(Agent):
    agent_type = "tts"
    verbose_name = "Voice"
    requires_llm_client = False
    essential = False

    @classmethod
    def config_options(cls, agent=None):

@@ -135,9 +144,11 @@ class TTSAgent(Agent):
        self.voices = {
            "elevenlabs": VoiceLibrary(api="elevenlabs"),
            "tts": VoiceLibrary(api="tts"),
            "openai": VoiceLibrary(api="openai"),
        }
        self.config = config.load_config()
        self.playback_done_event = asyncio.Event()
        self.preselect_voice = None
        self.actions = {
            "_config": AgentAction(
                enabled=True,

@@ -149,6 +160,7 @@ class TTSAgent(Agent):
                choices=[
                    {"value": "tts", "label": "TTS (Local)"},
                    {"value": "elevenlabs", "label": "Eleven Labs"},
                    {"value": "openai", "label": "OpenAI"},
                ],
                value="tts",
                label="API",

@@ -188,6 +200,25 @@ class TTSAgent(Agent):
                ),
            },
        ),
        "openai": AgentAction(
            enabled=True,
            condition=AgentActionConditional(
                attribute="_config.config.api", value="openai"
            ),
            label="OpenAI Settings",
            config={
                "model": AgentActionConfig(
                    type="text",
                    value="tts-1",
                    choices=[
                        {"value": "tts-1", "label": "TTS 1"},
                        {"value": "tts-1-hd", "label": "TTS 1 HD"},
                    ],
                    label="Model",
                    description="TTS model to use",
                ),
            },
        ),
    }

    self.actions["_config"].model_dump()

@@ -226,27 +257,45 @@ class TTSAgent(Agent):

    @property
    def agent_details(self):
        suffix = ""

        if not self.ready:
            suffix = f" - {self.not_ready_reason}"
        else:
            suffix = f" - {self.voice_id_to_label(self.default_voice_id)}"
        details = {
            "api": AgentDetail(
                icon="mdi-server-outline",
                value=self.api_label,
                description="The backend to use for TTS",
            ).model_dump(),
        }

        api = self.api
        choices = self.actions["_config"].config["api"].choices
        api_label = api
        for choice in choices:
            if choice["value"] == api:
                api_label = choice["label"]
                break
        if self.ready and self.enabled:
            details["voice"] = AgentDetail(
                icon="mdi-account-voice",
                value=self.voice_id_to_label(self.default_voice_id) or "",
                description="The voice to use for TTS",
                color="info",
            ).model_dump()
        elif self.enabled:
            details["error"] = AgentDetail(
                icon="mdi-alert",
                value=self.not_ready_reason,
                description=self.not_ready_reason,
                color="error",
            ).model_dump()

        return f"{api_label}{suffix}"
        return details

    @property
    def api(self):
        return self.actions["_config"].config["api"].value

    @property
    def api_label(self):
        choices = self.actions["_config"].config["api"].choices
        api = self.api
        for choice in choices:
            if choice["value"] == api:
                return choice["label"]
        return api

    @property
    def token(self):
        api = self.api

@@ -274,6 +323,8 @@ class TTSAgent(Agent):
        if not self.enabled:
            return "disabled"
        if self.ready:
            if getattr(self, "processing_bg", 0) > 0:
                return "busy_bg" if not getattr(self, "processing", False) else "busy"
            return "active" if not getattr(self, "processing", False) else "busy"
        if self.requires_token and not self.token:
            return "error"

@@ -291,7 +342,11 @@ class TTSAgent(Agent):

        return 250

    def apply_config(self, *args, **kwargs):
    @property
    def openai_api_key(self):
        return self.config.get("openai", {}).get("api_key")

    async def apply_config(self, *args, **kwargs):
        try:
            api = kwargs["actions"]["_config"]["config"]["api"]["value"]
        except KeyError:

@@ -300,10 +355,22 @@ class TTSAgent(Agent):
        api_changed = api != self.api

        log.debug(
            "apply_config", api=api, api_changed=api != self.api, current_api=self.api
            "apply_config",
            api=api,
            api_changed=api != self.api,
            current_api=self.api,
            args=args,
            kwargs=kwargs,
        )

        super().apply_config(*args, **kwargs)
        try:
            self.preselect_voice = kwargs["actions"]["_config"]["config"]["voice_id"][
                "value"
            ]
        except KeyError:
            self.preselect_voice = self.default_voice_id

        await super().apply_config(*args, **kwargs)

        if api_changed:
            try:

@@ -396,6 +463,11 @@ class TTSAgent(Agent):
            library.voices = await list_fn()
            library.last_synced = time.time()

        if self.preselect_voice:
            if self.voice(self.preselect_voice):
                self.actions["_config"].config["voice_id"].value = self.preselect_voice
                self.preselect_voice = None

        # if the current voice cannot be found, reset it
        if not self.voice(self.default_voice_id):
            self.actions["_config"].config["voice_id"].value = ""

@@ -421,9 +493,10 @@ class TTSAgent(Agent):

        # Start generating audio chunks in the background
        generation_task = asyncio.create_task(self.generate_chunks(generate_fn, chunks))
        await self.set_background_processing(generation_task)

        # Wait for both tasks to complete
        await asyncio.gather(generation_task)
        # await asyncio.gather(generation_task)

    async def generate_chunks(self, generate_fn, chunks):
        for chunk in chunks:

@@ -547,3 +620,33 @@ class TTSAgent(Agent):
        voices.sort(key=lambda x: x.label)

        return voices

    # OPENAI

    async def _generate_openai(self, text: str, chunk_size: int = 1024):

        client = AsyncOpenAI(api_key=self.openai_api_key)

        model = self.actions["openai"].config["model"].value

        response = await client.audio.speech.create(
            model=model, voice=self.default_voice_id, input=text
        )

        bytes_io = io.BytesIO()
        for chunk in response.iter_bytes(chunk_size=chunk_size):
            if chunk:
                bytes_io.write(chunk)

        # Put the audio data in the queue for playback
        return bytes_io.getvalue()

    async def _list_voices_openai(self) -> list[Voice]:
        return [
            Voice(value="alloy", label="Alloy"),
            Voice(value="echo", label="Echo"),
            Voice(value="fable", label="Fable"),
            Voice(value="onyx", label="Onyx"),
            Voice(value="nova", label="Nova"),
            Voice(value="shimmer", label="Shimmer"),
        ]

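The new `api_label` property is a plain value-to-label lookup over the action's `choices` list, replacing the inline loop the old `agent_details` carried. Isolated, with the choice dicts copied from the diff:

```python
CHOICES = [
    {"value": "tts", "label": "TTS (Local)"},
    {"value": "elevenlabs", "label": "Eleven Labs"},
    {"value": "openai", "label": "OpenAI"},
]


def api_label(api: str, choices=CHOICES) -> str:
    # return the display label for a configured API value,
    # falling back to the raw value when no label is registered
    for choice in choices:
        if choice["value"] == api:
            return choice["label"]
    return api


print(api_label("openai"))  # OpenAI
print(api_label("piper"))   # piper (unknown value falls through)
```

Factoring the lookup into a property lets both the agent details panel and any status string share one source of truth for the label.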
src/talemate/agents/visual/__init__.py (new file, 452 lines)
@@ -0,0 +1,452 @@
import asyncio
import traceback

import structlog

import talemate.agents.visual.automatic1111
import talemate.agents.visual.comfyui
import talemate.agents.visual.openai_image
from talemate.agents.base import (
    Agent,
    AgentAction,
    AgentActionConditional,
    AgentActionConfig,
    AgentDetail,
    set_processing,
)
from talemate.agents.registry import register
from talemate.client.base import ClientBase
from talemate.config import load_config
from talemate.emit import emit
from talemate.emit.signals import handlers as signal_handlers
from talemate.prompts.base import Prompt

from .commands import *  # noqa
from .context import VIS_TYPES, VisualContext, visual_context
from .handlers import HANDLERS
from .schema import RESOLUTION_MAP, RenderSettings
from .style import MAJOR_STYLES, STYLE_MAP, Style, combine_styles
from .websocket_handler import VisualWebsocketHandler

__all__ = [
    "VisualAgent",
]

BACKENDS = [
    {"value": mixin_backend, "label": mixin["label"]}
    for mixin_backend, mixin in HANDLERS.items()
]

log = structlog.get_logger("talemate.agents.visual")


class VisualBase(Agent):
    """
    The visual agent
    """

    agent_type = "visual"
    verbose_name = "Visualizer"
    essential = False
    websocket_handler = VisualWebsocketHandler

    ACTIONS = {}

    def __init__(self, client: ClientBase, **kwargs):
        self.client = client
        self.is_enabled = False
        self.backend_ready = False
        self.initialized = False
        self.config = load_config()
        self.actions = {
            "_config": AgentAction(
                enabled=True,
                label="Configure",
                description="Visual agent configuration",
                config={
                    "backend": AgentActionConfig(
                        type="text",
                        choices=BACKENDS,
                        value="automatic1111",
                        label="Backend",
                        description="The backend to use for visual processing",
                    ),
                    "default_style": AgentActionConfig(
                        type="text",
                        value="ink_illustration",
                        choices=MAJOR_STYLES,
                        label="Default Style",
                        description="The default style to use for visual processing",
                    ),
                },
            ),
            "automatic_generation": AgentAction(
                enabled=False,
                label="Automatic Generation",
                description="Allow automatic generation of visual content",
            ),
            "process_in_background": AgentAction(
                enabled=True,
                label="Process in Background",
                description="Process renders in the background",
            ),
        }

        for action_name, action in self.ACTIONS.items():
            self.actions[action_name] = action

        signal_handlers["config_saved"].connect(self.on_config_saved)

    @property
    def enabled(self):
        return self.is_enabled

    @property
    def has_toggle(self):
        return True

    @property
    def experimental(self):
        return False

    @property
    def backend(self):
        return self.actions["_config"].config["backend"].value

    @property
    def backend_name(self):
        key = self.actions["_config"].config["backend"].value

        for backend in BACKENDS:
            if backend["value"] == key:
                return backend["label"]

    @property
    def default_style(self):
        return STYLE_MAP.get(
            self.actions["_config"].config["default_style"].value, Style()
        )

    @property
    def ready(self):
        return self.backend_ready

    @property
    def api_url(self):
        try:
            return self.actions[self.backend].config["api_url"].value
        except KeyError:
            return None

    @property
    def agent_details(self):
        details = {
            "backend": AgentDetail(
                icon="mdi-server-outline",
                value=self.backend_name,
                description="The backend to use for visual processing",
            ).model_dump(),
            "client": AgentDetail(
                icon="mdi-network-outline",
                value=self.client.name if self.client else None,
                description="The client to use for prompt generation",
            ).model_dump(),
        }

        if not self.ready and self.enabled:
            details["status"] = AgentDetail(
                icon="mdi-alert",
                value=f"{self.backend_name} not ready",
                color="error",
                description=self.ready_check_error
                or f"{self.backend_name} is not ready for processing",
            ).model_dump()

        return details

    @property
    def process_in_background(self):
        return self.actions["process_in_background"].enabled

    @property
    def allow_automatic_generation(self):
        return self.actions["automatic_generation"].enabled

    def on_config_saved(self, event):
        config = event.data
        self.config = config
        asyncio.create_task(self.emit_status())

    async def on_ready_check_success(self):
        prev_ready = self.backend_ready
        self.backend_ready = True
        if not prev_ready:
            await self.emit_status()

    async def on_ready_check_failure(self, error):
        prev_ready = self.backend_ready
        self.backend_ready = False
        self.ready_check_error = str(error)
        if prev_ready:
            await self.emit_status()

    async def ready_check(self):
        if not self.enabled:
            return
        backend = self.backend
        fn = getattr(self, f"{backend.lower()}_ready", None)
        task = asyncio.create_task(fn())
        await super().ready_check(task)

    async def apply_config(self, *args, **kwargs):

        try:
            backend = kwargs["actions"]["_config"]["config"]["backend"]["value"]
        except KeyError:
            backend = self.backend

        backend_changed = backend != self.backend

        if backend_changed:
            self.backend_ready = False

        log.info(
            "apply_config",
            backend=backend,
            backend_changed=backend_changed,
            old_backend=self.backend,
        )

        await super().apply_config(*args, **kwargs)
        backend_fn = getattr(self, f"{self.backend.lower()}_apply_config", None)
        if backend_fn:
            task = asyncio.create_task(
                backend_fn(backend_changed=backend_changed, *args, **kwargs)
            )
            await self.set_background_processing(task)

        if not self.backend_ready:
            await self.ready_check()

        self.initialized = True

    def resolution_from_format(self, format: str, model_type: str = "sdxl"):
        if model_type not in RESOLUTION_MAP:
            raise ValueError(f"Model type {model_type} not found in resolution map")
        return RESOLUTION_MAP[model_type].get(
            format, RESOLUTION_MAP[model_type]["portrait"]
        )

    def prepare_prompt(self, prompt: str, styles: list[Style] = None) -> Style:

        prompt_style = Style()
        prompt_style.load(prompt)

        if styles:
            prompt_style.prepend(*styles)

        return prompt_style

    def vis_type_styles(self, vis_type: str):
        if vis_type == VIS_TYPES.CHARACTER:
            portrait_style = STYLE_MAP["character_portrait"].copy()
            return portrait_style
        elif vis_type == VIS_TYPES.ENVIRONMENT:
            environment_style = STYLE_MAP["environment"].copy()
            return environment_style
        return Style()

    async def apply_image(self, image: str):
        context = visual_context.get()

        log.debug("apply_image", image=image[:100], context=context)

        if context.vis_type == VIS_TYPES.CHARACTER:
            await self.apply_image_character(image, context.character_name)

    async def apply_image_character(self, image: str, character_name: str):
        character = self.scene.get_character(character_name)

        if not character:
            log.error("character not found", character_name=character_name)
            return

        if character.cover_image:
            log.info("character cover image already set", character_name=character_name)
            return

        asset = self.scene.assets.add_asset_from_image_data(
            f"data:image/png;base64,{image}"
        )
        character.cover_image = asset.id
        self.scene.assets.cover_image = asset.id
        self.scene.emit_status()

    async def emit_image(self, image: str):
        context = visual_context.get()
        await self.apply_image(image)
        emit(
            "image_generated",
            websocket_passthrough=True,
            data={
                "base64": image,
                "context": context.model_dump() if context else None,
            },
        )

    @set_processing
    async def generate(
        self, format: str = "portrait", prompt: str = None, automatic: bool = False
    ):

        context = visual_context.get()

        if not self.enabled:
            log.warning("generate", skipped="Visual agent not enabled")
            return

        if automatic and not self.allow_automatic_generation:
            log.warning(
                "generate",
                skipped="Automatic generation disabled",
                prompt=prompt,
                format=format,
                context=context,
            )
            return

        if not context and not prompt:
            log.error("generate", error="No context or prompt provided")
            return

        # Handle prompt generation based on context

        if not prompt and context.prompt:
            prompt = context.prompt

        if context.vis_type == VIS_TYPES.ENVIRONMENT and not prompt:
            prompt = await self.generate_environment_prompt(
                instructions=context.instructions
            )
        elif context.vis_type == VIS_TYPES.CHARACTER and not prompt:
            prompt = await self.generate_character_prompt(
                context.character_name, instructions=context.instructions
            )
        else:
            prompt = prompt or context.prompt

        initial_prompt = prompt

        # Augment the prompt with styles based on context

        thematic_style = self.default_style
        vis_type_styles = self.vis_type_styles(context.vis_type)
        prompt = self.prepare_prompt(prompt, [vis_type_styles, thematic_style])

        if not prompt:
            log.error(
                "generate", error="No prompt provided and no context to generate from"
            )
            return

        context.prompt = initial_prompt
        context.prepared_prompt = str(prompt)

        # Handle format (can either come from context or be passed in)

        if not format and context.format:
            format = context.format
        elif not format:
            format = "portrait"

        context.format = format

        # Call the backend specific generate function

        backend = self.backend
        fn = f"{backend.lower()}_generate"

        log.info(
            "generate", backend=backend, prompt=prompt, format=format, context=context
        )

        if not hasattr(self, fn):
            log.error("generate", error=f"Backend {backend} does not support generate")
            return

        # add the function call to the asyncio task queue

        if self.process_in_background:
            task = asyncio.create_task(getattr(self, fn)(prompt=prompt, format=format))
            await self.set_background_processing(task)
        else:
            await getattr(self, fn)(prompt=prompt, format=format)

    @set_processing
    async def generate_environment_prompt(self, instructions: str = None):

        response = await Prompt.request(
            "visual.generate-environment-prompt",
            self.client,
            "visualize",
            {
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
            },
        )

        return response.strip()

    @set_processing
    async def generate_character_prompt(
        self, character_name: str, instructions: str = None
    ):

        character = self.scene.get_character(character_name)

        response = await Prompt.request(
            "visual.generate-character-prompt",
            self.client,
            "visualize",
            {
                "scene": self.scene,
                "character_name": character_name,
                "character": character,
                "max_tokens": self.client.max_token_length,
                "instructions": instructions or "",
            },
        )

        return response.strip()

    async def generate_environment_background(self, instructions: str = None):
        with VisualContext(vis_type=VIS_TYPES.ENVIRONMENT, instructions=instructions):
            await self.generate(format="landscape")

    async def generate_character_portrait(
        self,
        character_name: str,
        instructions: str = None,
    ):
        with VisualContext(
            vis_type=VIS_TYPES.CHARACTER,
            character_name=character_name,
            instructions=instructions,
        ):
            await self.generate(format="portrait")


# apply mixins to the agent (from HANDLERS dict[str, cls])

for mixin_backend, mixin in HANDLERS.items():
    mixin_cls = mixin["cls"]
    VisualBase = type("VisualAgent", (mixin_cls, VisualBase), {})

    extend_actions = getattr(mixin_cls, "EXTEND_ACTIONS", {})

    for action_name, action in extend_actions.items():
        VisualBase.ACTIONS[action_name] = action


@register()
class VisualAgent(VisualBase):
    pass
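The loop at the bottom of the file composes the final `VisualAgent` by wrapping the base class in one mixin per registered handler via `type()`, and merging each mixin's `EXTEND_ACTIONS` into the shared `ACTIONS` dict. A reduced sketch of that pattern with toy mixins (none of these classes are the real backends):

```python
class Base:
    ACTIONS = {}

    def describe(self):
        return "base"


class A1111Mixin:  # toy stand-in for a registered backend mixin
    EXTEND_ACTIONS = {"automatic1111": "a1111 settings"}


class ComfyMixin:  # toy stand-in for a second backend mixin
    EXTEND_ACTIONS = {"comfyui": "comfyui settings"}


HANDLERS = {
    "automatic1111": {"cls": A1111Mixin},
    "comfyui": {"cls": ComfyMixin},
}

Composed = Base
for backend, handler in HANDLERS.items():
    mixin_cls = handler["cls"]
    # each iteration wraps the previous class, so every mixin
    # ends up in the composed class's MRO
    Composed = type("VisualAgent", (mixin_cls, Composed), {})
    for name, action in getattr(mixin_cls, "EXTEND_ACTIONS", {}).items():
        Composed.ACTIONS[name] = action

print(sorted(Composed.ACTIONS))  # ['automatic1111', 'comfyui']
print(issubclass(Composed, A1111Mixin) and issubclass(Composed, ComfyMixin))  # True
```

Note that `ACTIONS` is a class attribute shared down the chain, so the merged actions are visible to `__init__` when it copies `self.ACTIONS` into the per-instance `self.actions`.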
src/talemate/agents/visual/automatic1111.py (new file, 117 lines)
@@ -0,0 +1,117 @@
import base64
import io

import httpx
import structlog
from PIL import Image

from talemate.agents.base import (
    Agent,
    AgentAction,
    AgentActionConditional,
    AgentActionConfig,
    AgentDetail,
    set_processing,
)

from .handlers import register
from .schema import RenderSettings, Resolution
from .style import STYLE_MAP, Style

log = structlog.get_logger("talemate.agents.visual.automatic1111")


@register(backend_name="automatic1111", label="AUTOMATIC1111")
class Automatic1111Mixin:

    automatic1111_default_render_settings = RenderSettings()

    EXTEND_ACTIONS = {
        "automatic1111": AgentAction(
            enabled=True,
            condition=AgentActionConditional(
                attribute="_config.config.backend", value="automatic1111"
            ),
            label="Automatic1111 Settings",
            description="Setting overrides for the automatic1111 backend",
            config={
                "api_url": AgentActionConfig(
                    type="text",
                    value="http://localhost:7860",
                    label="API URL",
                    description="The URL of the backend API",
                ),
                "steps": AgentActionConfig(
                    type="number",
                    value=40,
                    label="Steps",
                    min=5,
                    max=150,
                    step=1,
                    description="number of render steps",
                ),
                "model_type": AgentActionConfig(
                    type="text",
                    value="sdxl",
                    choices=[
                        {"value": "sdxl", "label": "SDXL"},
                        {"value": "sd15", "label": "SD1.5"},
                    ],
                    label="Model Type",
                    description="Right now just differentiates between sdxl and sd15 - affects generation resolution",
                ),
            },
        )
    }

    @property
    def automatic1111_render_settings(self):
        if self.actions["automatic1111"].enabled:
            return RenderSettings(
                steps=self.actions["automatic1111"].config["steps"].value,
                type_model=self.actions["automatic1111"].config["model_type"].value,
            )
        else:
            return self.automatic1111_default_render_settings

    async def automatic1111_generate(self, prompt: Style, format: str):
        url = self.api_url
        resolution = self.resolution_from_format(
            format, self.automatic1111_render_settings.type_model
        )
        render_settings = self.automatic1111_render_settings
        payload = {
            "prompt": prompt.positive_prompt,
            "negative_prompt": prompt.negative_prompt,
            "steps": render_settings.steps,
            "width": resolution.width,
            "height": resolution.height,
        }

        log.info("automatic1111_generate", payload=payload, url=url)

        async with httpx.AsyncClient() as client:
            response = await client.post(
                url=f"{url}/sdapi/v1/txt2img", json=payload, timeout=90
            )

        r = response.json()

        # image = Image.open(io.BytesIO(base64.b64decode(r['images'][0])))
        # image.save('a1111-test.png')

        # log.info("automatic1111_generate", saved_to="a1111-test.png")

        for image in r["images"]:
            await self.emit_image(image)

    async def automatic1111_ready(self) -> bool:
        """
        Will send a GET to /sdapi/v1/memory and on 200 will return True
        """

        async with httpx.AsyncClient() as client:
            response = await client.get(
                url=f"{self.api_url}/sdapi/v1/memory", timeout=2
            )
            return response.status_code == 200
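`automatic1111_generate` ultimately posts a small JSON payload to the backend's `/sdapi/v1/txt2img` endpoint. Payload assembly can be isolated from the HTTP call; the resolution values below are illustrative, not taken from `RESOLUTION_MAP`:

```python
def txt2img_payload(
    positive: str, negative: str, steps: int, width: int, height: int
) -> dict:
    # field names match the AUTOMATIC1111 txt2img request used in the diff
    return {
        "prompt": positive,
        "negative_prompt": negative,
        "steps": steps,
        "width": width,
        "height": height,
    }


payload = txt2img_payload(
    "ink illustration, castle on a cliff",
    "blurry, low quality",
    steps=40,
    width=832,
    height=1216,
)
print(payload["steps"], payload["width"], payload["height"])  # 40 832 1216
```

Keeping positive and negative prompts as separate fields is what lets the `Style` object carry both halves through `prepare_prompt` and hand them off cleanly here.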
src/talemate/agents/visual/comfyui.py (new file, 324 lines)
@@ -0,0 +1,324 @@
import asyncio
import base64
import io
import json
import os
import random
import time
import urllib.parse

import httpx
import pydantic
import structlog
from PIL import Image

from talemate.agents.base import AgentAction, AgentActionConditional, AgentActionConfig

from .handlers import register
from .schema import RenderSettings, Resolution
from .style import STYLE_MAP, Style

log = structlog.get_logger("talemate.agents.visual.comfyui")


class Workflow(pydantic.BaseModel):
    nodes: dict

    def set_resolution(self, resolution: Resolution):

        # collect all latent image nodes; if there are multiple, prefer
        # the one titled "Talemate Resolution"

        # if no latent image node carries that title, the first latent
        # image node is used

        # resolution is updated on the selected node

        # if no latent image node is found at all, a warning is logged

        latent_image_node = None

        for node_id, node in self.nodes.items():
            if node["class_type"] == "EmptyLatentImage":
                if not latent_image_node:
                    latent_image_node = node
                elif node["_meta"]["title"] == "Talemate Resolution":
                    latent_image_node = node
                    break

        if not latent_image_node:
            log.warning("set_resolution", error="No latent image node found")
            return

        latent_image_node["inputs"]["width"] = resolution.width
        latent_image_node["inputs"]["height"] = resolution.height

    def set_prompt(self, prompt: str, negative_prompt: str = None):

        # collect all CLIPTextEncode nodes; if there are multiple, prefer
        # the ones titled "Talemate Positive Prompt" and
        # "Talemate Negative Prompt"
        #
        # if no CLIPTextEncode node is titled "Talemate Positive Prompt",
        # the first CLIPTextEncode node is used
        #
        # if no CLIPTextEncode node is titled "Talemate Negative Prompt",
        # the second CLIPTextEncode node is used
        #
        # prompts are updated on the selected nodes

        # if no CLIPTextEncode node is found, an exception is raised for
        # the positive prompt, and likewise for the negative prompt when
        # one was provided

        positive_prompt_node = None
        negative_prompt_node = None

        for node_id, node in self.nodes.items():

            if node["class_type"] == "CLIPTextEncode":
                if not positive_prompt_node:
                    positive_prompt_node = node
                elif node["_meta"]["title"] == "Talemate Positive Prompt":
                    positive_prompt_node = node
                elif not negative_prompt_node:
                    negative_prompt_node = node
                elif node["_meta"]["title"] == "Talemate Negative Prompt":
                    negative_prompt_node = node

        if not positive_prompt_node:
            raise ValueError("No positive prompt node found")

        positive_prompt_node["inputs"]["text"] = prompt

        if negative_prompt and not negative_prompt_node:
            raise ValueError("No negative prompt node found")

        if negative_prompt:
            negative_prompt_node["inputs"]["text"] = negative_prompt

    def set_checkpoint(self, checkpoint: str):

        # collect all CheckpointLoaderSimple nodes; if there are multiple,
        # prefer the one titled "Talemate Load Checkpoint"

        # if no CheckpointLoaderSimple node carries that title, the first
        # CheckpointLoaderSimple node is used

        # checkpoint is updated on the selected node

        # if no CheckpointLoaderSimple node is found, a warning is logged

        checkpoint_node = None

        for node_id, node in self.nodes.items():
            if node["class_type"] == "CheckpointLoaderSimple":
                if not checkpoint_node:
                    checkpoint_node = node
                elif node["_meta"]["title"] == "Talemate Load Checkpoint":
                    checkpoint_node = node
                    break

        if not checkpoint_node:
            log.warning("set_checkpoint", error="No checkpoint node found")
            return

        checkpoint_node["inputs"]["ckpt_name"] = checkpoint

    def set_seeds(self):
        for node in self.nodes.values():
            for field in node.get("inputs", {}).keys():
                if field == "noise_seed":
                    node["inputs"]["noise_seed"] = random.randint(0, 999999999999999)

||||
@register(backend_name="comfyui", label="ComfyUI")
|
||||
class ComfyUIMixin:
|
||||
|
||||
comfyui_default_render_settings = RenderSettings()
|
||||
|
||||
EXTEND_ACTIONS = {
|
||||
"comfyui": AgentAction(
|
||||
enabled=True,
|
||||
condition=AgentActionConditional(
|
||||
attribute="_config.config.backend", value="comfyui"
|
||||
),
|
||||
label="ComfyUI Settings",
|
||||
description="Setting overrides for the comfyui backend",
|
||||
config={
|
||||
"api_url": AgentActionConfig(
|
||||
type="text",
|
||||
value="http://localhost:8188",
|
||||
label="API URL",
|
||||
description="The URL of the backend API",
|
||||
),
|
||||
"workflow": AgentActionConfig(
|
||||
type="text",
|
||||
value="default-sdxl.json",
|
||||
label="Workflow",
|
||||
description="The workflow to use for comfyui (workflow file name inside ./templates/comfyui-workflows)",
|
||||
),
|
||||
"checkpoint": AgentActionConfig(
|
||||
type="text",
|
||||
value="default",
|
||||
label="Checkpoint",
|
||||
choices=[],
|
||||
description="The main checkpoint to use.",
|
||||
),
|
||||
},
|
||||
)
|
||||
}
|
||||
|
||||
@property
|
||||
def comfyui_workflow_filename(self):
|
||||
base_name = self.actions["comfyui"].config["workflow"].value
|
||||
|
||||
# make absolute path
|
||||
abs_path = os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
"..",
|
||||
"..",
|
||||
"..",
|
||||
"..",
|
||||
"templates",
|
||||
"comfyui-workflows",
|
||||
base_name,
|
||||
)
|
||||
|
||||
return abs_path
|
||||
|
||||
@property
|
||||
def comfyui_workflow_is_sdxl(self) -> bool:
|
||||
"""
|
||||
Returns true if `sdxl` is in worhflow file name (case insensitive)
|
||||
"""
|
||||
|
||||
return "sdxl" in self.comfyui_workflow_filename.lower()
|
||||
|
||||
@property
|
||||
def comfyui_workflow(self) -> Workflow:
|
||||
workflow = self.comfyui_workflow_filename
|
||||
if not workflow:
|
||||
raise ValueError("No comfyui workflow file specified")
|
||||
|
||||
with open(workflow, "r") as f:
|
||||
return Workflow(nodes=json.load(f))
|
||||
|
||||
@property
|
||||
async def comfyui_object_info(self):
|
||||
if hasattr(self, "_comfyui_object_info"):
|
||||
return self._comfyui_object_info
|
||||
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.get(url=f"{self.api_url}/object_info")
|
||||
self._comfyui_object_info = response.json()
|
||||
|
||||
return self._comfyui_object_info
|
||||
|
||||
@property
|
||||
async def comfyui_checkpoints(self):
|
||||
loader_node = (await self.comfyui_object_info)["CheckpointLoaderSimple"]
|
||||
_checkpoints = loader_node["input"]["required"]["ckpt_name"][0]
|
||||
return [
|
||||
{"label": checkpoint, "value": checkpoint} for checkpoint in _checkpoints
|
||||
]
|
||||
|
||||
async def comfyui_get_image(self, filename: str, subfolder: str, folder_type: str):
|
||||
data = {"filename": filename, "subfolder": subfolder, "type": folder_type}
|
||||
url_values = urllib.parse.urlencode(data)
|
||||
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.get(url=f"{self.api_url}/view?{url_values}")
|
||||
return response.content
|
||||
|
||||
async def comfyui_get_history(self, prompt_id: str):
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.get(url=f"{self.api_url}/history/{prompt_id}")
|
||||
return response.json()
|
||||
|
||||
async def comfyui_get_images(self, prompt_id: str, max_wait: int = 60.0):
|
||||
output_images = {}
|
||||
history = {}
|
||||
|
||||
start = time.time()
|
||||
|
||||
while not history:
|
||||
log.info(
|
||||
"comfyui_get_images", waiting_for_history=True, prompt_id=prompt_id
|
||||
)
|
||||
history = await self.comfyui_get_history(prompt_id)
|
||||
await asyncio.sleep(1.0)
|
||||
if time.time() - start > max_wait:
|
||||
raise TimeoutError("Max wait time exceeded")
|
||||
|
||||
for node_id, node_output in history[prompt_id]["outputs"].items():
|
||||
if "images" in node_output:
|
||||
images_output = []
|
||||
for image in node_output["images"]:
|
||||
image_data = await self.comfyui_get_image(
|
||||
image["filename"], image["subfolder"], image["type"]
|
||||
)
|
||||
images_output.append(image_data)
|
||||
output_images[node_id] = images_output
|
||||
|
||||
return output_images
|
||||
|
||||
async def comfyui_generate(self, prompt: Style, format: str):
|
||||
url = self.api_url
|
||||
workflow = self.comfyui_workflow
|
||||
is_sdxl = self.comfyui_workflow_is_sdxl
|
||||
|
||||
resolution = self.resolution_from_format(format, "sdxl" if is_sdxl else "sd15")
|
||||
|
||||
workflow.set_resolution(resolution)
|
||||
workflow.set_prompt(prompt.positive_prompt, prompt.negative_prompt)
|
||||
workflow.set_seeds()
|
||||
workflow.set_checkpoint(self.actions["comfyui"].config["checkpoint"].value)
|
||||
|
||||
payload = {"prompt": workflow.model_dump().get("nodes")}
|
||||
|
||||
log.info("comfyui_generate", payload=payload, url=url)
|
||||
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.post(url=f"{url}/prompt", json=payload, timeout=90)
|
||||
|
||||
log.info("comfyui_generate", response=response.text)
|
||||
|
||||
r = response.json()
|
||||
|
||||
prompt_id = r["prompt_id"]
|
||||
|
||||
images = await self.comfyui_get_images(prompt_id)
|
||||
for node_id, node_images in images.items():
|
||||
for i, image in enumerate(node_images):
|
||||
await self.emit_image(base64.b64encode(image).decode("utf-8"))
|
||||
# image = Image.open(io.BytesIO(image))
|
||||
# image.save(f'comfyui-test.png')
|
||||
|
||||
async def comfyui_apply_config(
|
||||
self, backend_changed: bool = False, *args, **kwargs
|
||||
):
|
||||
log.debug(
|
||||
"comfyui_apply_config",
|
||||
backend_changed=backend_changed,
|
||||
enabled=self.enabled,
|
||||
)
|
||||
if (not self.initialized or backend_changed) and self.enabled:
|
||||
checkpoints = await self.comfyui_checkpoints
|
||||
selected_checkpoint = self.actions["comfyui"].config["checkpoint"].value
|
||||
self.actions["comfyui"].config["checkpoint"].choices = checkpoints
|
||||
self.actions["comfyui"].config["checkpoint"].value = selected_checkpoint
|
||||
|
||||
async def comfyui_ready(self) -> bool:
|
||||
"""
|
||||
Will send a GET to /system_stats and on 200 will return True
|
||||
"""
|
||||
|
||||
async with httpx.AsyncClient() as client:
|
||||
response = await client.get(url=f"{self.api_url}/system_stats", timeout=2)
|
||||
return response.status_code == 200
|
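The wait-for-history loop in `comfyui_get_images` can be sketched synchronously, with the HTTP call stubbed out. The `fetch_history` callable and the fake responses below are hypothetical stand-ins for `comfyui_get_history`:

```python
import time


def wait_for_history(fetch_history, prompt_id, max_wait=60.0, poll=0.01):
    # poll until the prompt id shows up in the history, or time out
    start = time.time()
    history = {}
    while not history:
        history = fetch_history(prompt_id)
        time.sleep(poll)
        if time.time() - start > max_wait:
            raise TimeoutError("Max wait time exceeded")
    return history


calls = []


def fake_fetch(prompt_id):
    # pretend the ComfyUI job finishes on the third poll
    calls.append(prompt_id)
    return {prompt_id: {"outputs": {}}} if len(calls) >= 3 else {}


result = wait_for_history(fake_fetch, "abc")
```

The same pattern applies unchanged in the async version; only `time.sleep` becomes `asyncio.sleep`.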
src/talemate/agents/visual/commands.py (new file)
@@ -0,0 +1,68 @@
from talemate.agents.visual.context import VIS_TYPES, VisualContext
from talemate.commands.base import TalemateCommand
from talemate.commands.manager import register
from talemate.instance import get_agent

__all__ = [
    "CmdVisualizeTestGenerate",
]


@register
class CmdVisualizeTestGenerate(TalemateCommand):
    """
    Generates a visual test
    """

    name = "visual_test_generate"
    description = "Will generate a visual test"
    aliases = ["vis_test", "vtg"]

    label = "Visualize test"

    async def run(self):
        visual = get_agent("visual")
        prompt = self.args[0]
        with VisualContext(vis_type=VIS_TYPES.UNSPECIFIED):
            await visual.generate(prompt)
        return True


@register
class CmdVisualizeEnvironment(TalemateCommand):
    """
    Shows the environment
    """

    name = "visual_environment"
    description = "Will show the environment"
    aliases = ["vis_env"]

    label = "Visualize environment"

    async def run(self):
        visual = get_agent("visual")
        await visual.generate_environment_background(
            instructions=self.args[0] if len(self.args) > 0 else None
        )
        return True


@register
class CmdVisualizeCharacter(TalemateCommand):
    """
    Shows a character
    """

    name = "visual_character"
    description = "Will show a character"
    aliases = ["vis_char"]

    label = "Visualize character"

    async def run(self):
        visual = get_agent("visual")
        character_name = self.args[0]
        instructions = self.args[1] if len(self.args) > 1 else None
        await visual.generate_character_portrait(character_name, instructions)
        return True
src/talemate/agents/visual/context.py (new file)
@@ -0,0 +1,55 @@
import contextvars
import enum
from typing import Union

import pydantic

__all__ = [
    "VIS_TYPES",
    "visual_context",
    "VisualContext",
]


class VIS_TYPES(str, enum.Enum):
    UNSPECIFIED = "UNSPECIFIED"
    ENVIRONMENT = "ENVIRONMENT"
    CHARACTER = "CHARACTER"
    ITEM = "ITEM"


visual_context = contextvars.ContextVar("visual_context", default=None)


class VisualContextState(pydantic.BaseModel):
    character_name: Union[str, None] = None
    instructions: Union[str, None] = None
    vis_type: VIS_TYPES = VIS_TYPES.ENVIRONMENT
    prompt: Union[str, None] = None
    prepared_prompt: Union[str, None] = None
    format: Union[str, None] = None


class VisualContext:
    def __init__(
        self,
        character_name: Union[str, None] = None,
        instructions: Union[str, None] = None,
        vis_type: VIS_TYPES = VIS_TYPES.ENVIRONMENT,
        prompt: Union[str, None] = None,
        **kwargs,
    ):
        self.state = VisualContextState(
            character_name=character_name,
            instructions=instructions,
            vis_type=vis_type,
            prompt=prompt,
            **kwargs,
        )

    def __enter__(self):
        self.token = visual_context.set(self.state)

    def __exit__(self, *args, **kwargs):
        visual_context.reset(self.token)
        return False
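The enter/exit behavior of `VisualContext` above can be exercised with a trimmed-down, pydantic-free sketch (the state is a plain dict here, and the values are hypothetical):

```python
import contextvars

# the context variable holds the active visual state, or None outside
# of any `with VisualContext(...)` block
visual_context = contextvars.ContextVar("visual_context", default=None)


class VisualContext:
    def __init__(self, **state):
        self.state = state

    def __enter__(self):
        # remember the token so the previous value can be restored
        self.token = visual_context.set(self.state)

    def __exit__(self, *args):
        visual_context.reset(self.token)
        return False


with VisualContext(vis_type="CHARACTER", character_name="Elmer"):
    inside = visual_context.get()

outside = visual_context.get()
```

Because `ContextVar.reset` restores the prior value, nested contexts unwind correctly, which is what lets agent code deep in the call stack read the active visualization state without it being passed explicitly.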
src/talemate/agents/visual/handlers.py (new file)
@@ -0,0 +1,17 @@
__all__ = [
    "HANDLERS",
    "register",
]

HANDLERS = {}


class register:

    def __init__(self, backend_name: str, label: str):
        self.backend_name = backend_name
        self.label = label

    def __call__(self, mixin_cls):
        HANDLERS[self.backend_name] = {"label": self.label, "cls": mixin_cls}
        return mixin_cls
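Registering a backend mixin with this decorator class works like so; `DemoMixin` is a hypothetical backend used only for illustration:

```python
HANDLERS = {}


class register:
    """Parameterized class decorator: records a backend mixin in HANDLERS."""

    def __init__(self, backend_name: str, label: str):
        self.backend_name = backend_name
        self.label = label

    def __call__(self, mixin_cls):
        HANDLERS[self.backend_name] = {"label": self.label, "cls": mixin_cls}
        return mixin_cls


@register(backend_name="demo", label="Demo")
class DemoMixin:
    pass
```

The decorated class is returned unchanged, so registration is a pure side effect on the `HANDLERS` table, which the visual agent can later consult to present backend choices.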
src/talemate/agents/visual/openai_image.py (new file)
@@ -0,0 +1,127 @@
import base64
import io

import httpx
import structlog
from openai import AsyncOpenAI
from PIL import Image

from talemate.agents.base import (
    Agent,
    AgentAction,
    AgentActionConditional,
    AgentActionConfig,
    AgentDetail,
    set_processing,
)

from .handlers import register
from .schema import RenderSettings, Resolution
from .style import STYLE_MAP, Style

log = structlog.get_logger("talemate.agents.visual.openai_image")


@register(backend_name="openai_image", label="OpenAI")
class OpenAIImageMixin:

    openai_image_default_render_settings = RenderSettings()

    EXTEND_ACTIONS = {
        "openai_image": AgentAction(
            enabled=False,
            condition=AgentActionConditional(
                attribute="_config.config.backend", value="openai_image"
            ),
            label="OpenAI Image Generation Advanced Settings",
            description="Setting overrides for the openai backend",
            config={
                "model_type": AgentActionConfig(
                    type="text",
                    value="dall-e-3",
                    choices=[
                        {"value": "dall-e-3", "label": "DALL-E 3"},
                        {"value": "dall-e-2", "label": "DALL-E 2"},
                    ],
                    label="Model Type",
                    description="Image generation model",
                ),
                "quality": AgentActionConfig(
                    type="text",
                    value="standard",
                    choices=[
                        {"value": "standard", "label": "Standard"},
                        {"value": "hd", "label": "HD"},
                    ],
                    label="Quality",
                    description="Image generation quality",
                ),
            },
        )
    }

    @property
    def openai_api_key(self):
        return self.config.get("openai", {}).get("api_key")

    @property
    def openai_model_type(self):
        return self.actions["openai_image"].config["model_type"].value

    @property
    def openai_quality(self):
        return self.actions["openai_image"].config["quality"].value

    async def openai_image_generate(self, prompt: Style, format: str):
        """
        Reference example from the OpenAI docs:

        from openai import OpenAI
        client = OpenAI()

        response = client.images.generate(
            model="dall-e-3",
            prompt="a white siamese cat",
            size="1024x1024",
            quality="standard",
            n=1,
        )

        image_url = response.data[0].url
        """

        client = AsyncOpenAI(api_key=self.openai_api_key)

        # When using DALL-E 3, images can have a size of 1024x1024,
        # 1024x1792 or 1792x1024 pixels.

        if format == "portrait":
            resolution = Resolution(width=1024, height=1792)
        elif format == "landscape":
            resolution = Resolution(width=1792, height=1024)
        else:
            resolution = Resolution(width=1024, height=1024)

        response = await client.images.generate(
            model=self.openai_model_type,
            prompt=prompt.positive_prompt,
            size=f"{resolution.width}x{resolution.height}",
            quality=self.openai_quality,
            n=1,
        )

        download_url = response.data[0].url

        async with httpx.AsyncClient() as client:
            response = await client.get(download_url, timeout=90)
            # bytes to base64 encoded
            image = base64.b64encode(response.content).decode("utf-8")
            await self.emit_image(image)

    async def openai_image_ready(self) -> bool:
        """
        Returns True if an OpenAI API key is configured, otherwise raises
        """

        if not self.openai_api_key:
            raise ValueError("OpenAI API Key not set")

        return True
src/talemate/agents/visual/schema.py (new file)
@@ -0,0 +1,32 @@
import pydantic

__all__ = [
    "RenderSettings",
    "Resolution",
    "RESOLUTION_MAP",
]

RESOLUTION_MAP = {}


class RenderSettings(pydantic.BaseModel):
    type_model: str = "sdxl"
    steps: int = 40


class Resolution(pydantic.BaseModel):
    width: int
    height: int


RESOLUTION_MAP["sdxl"] = {
    "portrait": Resolution(width=832, height=1216),
    "landscape": Resolution(width=1216, height=832),
    "square": Resolution(width=1024, height=1024),
}

RESOLUTION_MAP["sd15"] = {
    "portrait": Resolution(width=512, height=768),
    "landscape": Resolution(width=768, height=512),
    "square": Resolution(width=768, height=768),
}
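Looking up a resolution from this map can be sketched with a small helper. `resolution_from_format` below is a hypothetical standalone version (the mixins call a method of that name on the agent); plain tuples stand in for the pydantic `Resolution` model, and the square fallback for unknown formats is an assumption:

```python
# mirrors RESOLUTION_MAP above, with (width, height) tuples
RESOLUTION_MAP = {
    "sdxl": {"portrait": (832, 1216), "landscape": (1216, 832), "square": (1024, 1024)},
    "sd15": {"portrait": (512, 768), "landscape": (768, 512), "square": (768, 768)},
}


def resolution_from_format(fmt: str, model_type: str = "sdxl"):
    # fall back to square when the requested format is unknown (assumption)
    return RESOLUTION_MAP[model_type].get(fmt, RESOLUTION_MAP[model_type]["square"])
```

This is the lookup `comfyui_generate` relies on when it picks "sdxl" or "sd15" based on the workflow file name.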
src/talemate/agents/visual/style.py (new file)
@@ -0,0 +1,112 @@
import pydantic

__all__ = [
    "Style",
    "STYLE_MAP",
    "THEME_MAP",
    "MAJOR_STYLES",
    "combine_styles",
]

STYLE_MAP = {}
THEME_MAP = {}
MAJOR_STYLES = {}


class Style(pydantic.BaseModel):
    keywords: list[str] = pydantic.Field(default_factory=list)
    negative_keywords: list[str] = pydantic.Field(default_factory=list)

    @property
    def positive_prompt(self):
        return ", ".join(self.keywords)

    @property
    def negative_prompt(self):
        return ", ".join(self.negative_keywords)

    def __str__(self):
        return f"POSITIVE: {self.positive_prompt}\nNEGATIVE: {self.negative_prompt}"

    def load(self, prompt: str, negative_prompt: str = ""):
        self.keywords = prompt.split(", ")
        self.negative_keywords = negative_prompt.split(", ")
        return self

    def prepend(self, *styles):
        for style in styles:
            for idx in range(len(style.keywords) - 1, -1, -1):
                kw = style.keywords[idx]
                if kw not in self.keywords:
                    self.keywords.insert(0, kw)

            for idx in range(len(style.negative_keywords) - 1, -1, -1):
                kw = style.negative_keywords[idx]
                if kw not in self.negative_keywords:
                    self.negative_keywords.insert(0, kw)

        return self

    def append(self, *styles):
        for style in styles:
            for kw in style.keywords:
                if kw not in self.keywords:
                    self.keywords.append(kw)

            for kw in style.negative_keywords:
                if kw not in self.negative_keywords:
                    self.negative_keywords.append(kw)

        return self

    def copy(self):
        return Style(
            keywords=self.keywords.copy(),
            negative_keywords=self.negative_keywords.copy(),
        )


# Almost taken straight from some of the fooocus style presets, credit goes to the original author

STYLE_MAP["digital_art"] = Style(
    keywords="digital artwork, masterpiece, best quality, high detail".split(", "),
    negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)

STYLE_MAP["concept_art"] = Style(
    keywords="concept art, conceptual sketch, masterpiece, best quality, high detail".split(
        ", "
    ),
    negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)

STYLE_MAP["ink_illustration"] = Style(
    keywords="ink illustration, painting, masterpiece, best quality".split(", "),
    negative_keywords="text, watermark, low quality, blurry, photo".split(", "),
)

STYLE_MAP["anime"] = Style(
    keywords="anime, masterpiece, best quality, illustration".split(", "),
    negative_keywords="text, watermark, low quality, blurry, photo, 3d".split(", "),
)

STYLE_MAP["character_portrait"] = Style(keywords="solo, looking at viewer".split(", "))

STYLE_MAP["environment"] = Style(
    keywords="scenery, environment, background, postcard".split(", "),
    negative_keywords="character, portrait, looking at viewer, people".split(", "),
)

MAJOR_STYLES = [
    {"value": "digital_art", "label": "Digital Art"},
    {"value": "concept_art", "label": "Concept Art"},
    {"value": "ink_illustration", "label": "Ink Illustration"},
    {"value": "anime", "label": "Anime"},
]


def combine_styles(*styles):
    keywords = []
    for style in styles:
        keywords.extend(style.keywords)
    return Style(keywords=list(set(keywords)))
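The reverse-iteration trick in `Style.prepend` keeps the prepended style's keyword order intact while skipping duplicates. A minimal standalone sketch of that merge logic (plain lists, hypothetical function name):

```python
def prepend_keywords(base: list[str], extra: list[str]) -> list[str]:
    # walk the extra keywords in reverse so that inserting each one at
    # index 0 preserves their original relative order; duplicates already
    # present in the base list are skipped
    result = base.copy()
    for idx in range(len(extra) - 1, -1, -1):
        kw = extra[idx]
        if kw not in result:
            result.insert(0, kw)
    return result


merged = prepend_keywords(["solo"], ["anime", "masterpiece", "solo"])
```

Iterating forward and inserting at index 0 would reverse the extra keywords; iterating in reverse avoids that without a second pass.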
src/talemate/agents/visual/websocket_handler.py (new file)
@@ -0,0 +1,84 @@
from typing import Union

import pydantic
import structlog

from talemate.instance import get_agent
from talemate.server.websocket_plugin import Plugin

from .context import VisualContext, VisualContextState

__all__ = [
    "VisualWebsocketHandler",
]

log = structlog.get_logger("talemate.server.visual")


class SetCoverImagePayload(pydantic.BaseModel):
    base64: str
    context: Union[VisualContextState, None] = None


class RegeneratePayload(pydantic.BaseModel):
    context: Union[VisualContextState, None] = None


class VisualWebsocketHandler(Plugin):
    router = "visual"

    async def handle_regenerate(self, data: dict):
        """
        Regenerates the image based on the context.
        """

        payload = RegeneratePayload(**data)

        context = payload.context

        visual = get_agent("visual")

        with VisualContext(**context.model_dump()):
            await visual.generate(format="")

    async def handle_cover_image(self, data: dict):
        """
        Sets the cover image for a character and the scene.
        """

        payload = SetCoverImagePayload(**data)

        context = payload.context
        scene = self.scene

        if context and context.character_name:

            character = scene.get_character(context.character_name)

            if not character:
                log.error("character not found", character_name=context.character_name)
                return

            asset = scene.assets.add_asset_from_image_data(payload.base64)

            log.info("setting scene cover image", character_name=context.character_name)
            scene.assets.cover_image = asset.id

            log.info(
                "setting character cover image", character_name=context.character_name
            )
            character.cover_image = asset.id

            scene.emit_status()
            self.websocket_handler.request_scene_assets([asset.id])

            self.websocket_handler.queue_put(
                {
                    "type": "scene_asset_character_cover_image",
                    "asset_id": asset.id,
                    "asset": self.scene.assets.get_asset_bytes_as_base64(asset.id),
                    "media_type": asset.media_type,
                    "character": character.name,
                }
            )
            return
@@ -527,10 +527,15 @@ class WorldStateAgent(Agent):
        if reset and reinforcement.insert == "sequential":
            self.scene.pop_history(typ="reinforcement", source=source, all=True)

        if reinforcement.insert == "sequential":
            kind = "analyze_freeform_medium_short"
        else:
            kind = "analyze_freeform"

        answer = await Prompt.request(
            "world_state.update-reinforcements",
            self.client,
            kind,
            vars={
                "scene": self.scene,
                "max_tokens": self.client.max_token_length,
@@ -546,6 +551,13 @@ class WorldStateAgent(Agent):
            },
        )

        # sequential reinforcement should be a single sentence, so we
        # split on line breaks and take the first line in case the
        # LLM did not understand the request and returned a longer response

        if reinforcement.insert == "sequential":
            answer = answer.split("\n")[0]

        reinforcement.answer = answer
        reinforcement.due = reinforcement.interval
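The first-line guard above is a simple but effective defense against over-long LLM completions; as a standalone sketch (hypothetical function name):

```python
def first_line(answer: str) -> str:
    # sequential reinforcement answers should be a single sentence;
    # keep only the first line in case the model returned more
    return answer.split("\n")[0]


trimmed = first_line("She is calm.\nShe also recalls the storm in detail.")
```

Note that `str.split("\n")` always yields at least one element, so this is safe on input without any line breaks.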
@@ -58,6 +58,14 @@ class Defaults(pydantic.BaseModel):
    max_token_length: int = 4096


class ExtraField(pydantic.BaseModel):
    name: str
    type: str
    label: str
    required: bool
    description: str


class ClientBase:
    api_url: str
    model_name: str
@@ -91,7 +99,9 @@ class ClientBase:
        self.name = name or self.client_type
        self.log = structlog.get_logger(f"client.{self.client_type}")
        if "max_token_length" in kwargs:
            self.max_token_length = (
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
            )
        self.set_client(max_token_length=self.max_token_length)

    def __str__(self):
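The coercion above fixes the bug where changing the context length could break the next generation (a non-integer value arriving from the UI). As a standalone sketch with a hypothetical name:

```python
def coerce_max_token_length(value, default=4096):
    # empty strings and None fall back to the default; numeric strings
    # (as sent by the frontend) are cast to int
    return int(value) if value else default


assert_value = coerce_max_token_length("8192")
```

Centralizing the cast means downstream token-budget arithmetic never sees a string.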
@@ -135,7 +145,7 @@ class ClientBase:
            self.api_url = kwargs["api_url"]

        if kwargs.get("max_token_length"):
            self.max_token_length = int(kwargs["max_token_length"])

        if "enabled" in kwargs:
            self.enabled = bool(kwargs["enabled"])
@@ -193,6 +203,8 @@ class ClientBase:
            return system_prompts.ANALYST
        if "summarize" in kind:
            return system_prompts.SUMMARIZE
        if "visualize" in kind:
            return system_prompts.VISUALIZE

        else:
@@ -220,6 +232,8 @@ class ClientBase:
            return system_prompts.ANALYST_NO_DECENSOR
        if "summarize" in kind:
            return system_prompts.SUMMARIZE_NO_DECENSOR
        if "visualize" in kind:
            return system_prompts.VISUALIZE_NO_DECENSOR

        return system_prompts.BASIC
@@ -249,13 +263,7 @@ class ClientBase:

        prompt_template_example, prompt_template_file = self.prompt_template_example()

        data = {
            "api_key": self.api_key,
            "prompt_template_example": prompt_template_example,
            "has_prompt_template": (
@@ -264,7 +272,18 @@ class ClientBase:
            "template_file": prompt_template_file,
            "meta": self.Meta().model_dump(),
            "error_action": None,
        }

        for field_name in getattr(self.Meta(), "extra_fields", {}).keys():
            data[field_name] = getattr(self, field_name, None)

        emit(
            "client_status",
            message=self.client_type,
            id=self.name,
            details=model_name,
            status=status,
            data=data,
        )

        if status_change:
@@ -177,6 +177,9 @@ class OpenAIClient(ClientBase):
        if not self.model_name:
            self.model_name = "gpt-3.5-turbo-16k"

        if max_token_length and not isinstance(max_token_length, int):
            max_token_length = int(max_token_length)

        model = self.model_name

        self.client = AsyncOpenAI(api_key=self.openai_api_key)
@@ -1,10 +1,13 @@
import pydantic
import structlog
from openai import AsyncOpenAI, NotFoundError, PermissionDeniedError

from talemate.client.base import ClientBase
from talemate.client.registry import register
from talemate.emit import emit

log = structlog.get_logger("talemate.client.openai_compat")

EXPERIMENTAL_DESCRIPTION = """Use this client if you want to connect to a service implementing an OpenAI-compatible API. Success is going to depend on the level of compatibility. Use the actual OpenAI client if you want to connect to OpenAI's API."""
@@ -28,8 +31,9 @@ class OpenAICompatibleClient(ClientBase):
    manual_model: bool = True
    defaults: Defaults = Defaults()

    def __init__(self, model=None, api_key=None, **kwargs):
        self.model_name = model
        self.api_key = api_key
        super().__init__(**kwargs)

    @property
@@ -37,8 +41,13 @@ class OpenAICompatibleClient(ClientBase):
        return EXPERIMENTAL_DESCRIPTION

    def set_client(self, **kwargs):
        self.api_key = kwargs.get("api_key", self.api_key)

        url = self.api_url
        if not url.endswith("/v1"):
            url = url + "/v1"

        self.client = AsyncOpenAI(base_url=url, api_key=self.api_key)
        self.model_name = (
            kwargs.get("model") or kwargs.get("model_name") or self.model_name
        )
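The URL auto-fixing above (part of the "auto fixing urls" fix in the changelog) can be sketched as a small pure function with a hypothetical name:

```python
def normalize_base_url(url: str) -> str:
    # OpenAI-compatible servers expose their API under /v1; append it
    # only when the user-supplied URL does not already end with it
    return url if url.endswith("/v1") else url + "/v1"


normalized = normalize_base_url("http://localhost:5000")
```

Making the suffix idempotent means users can paste either form of the URL without double `/v1/v1` paths being produced on reconfigure.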
@@ -48,7 +57,7 @@ class OpenAICompatibleClient(ClientBase):

        keys = list(parameters.keys())

        valid_keys = ["temperature", "top_p", "max_tokens"]

        for key in keys:
            if key not in valid_keys:
@@ -106,8 +115,12 @@ class OpenAICompatibleClient(ClientBase):
        if "api_url" in kwargs:
            self.api_url = kwargs["api_url"]
        if "max_token_length" in kwargs:
            self.max_token_length = (
                int(kwargs["max_token_length"]) if kwargs["max_token_length"] else 4096
            )
        if "api_key" in kwargs:
            self.api_auth = kwargs["api_key"]

        log.warning("reconfigure", kwargs=kwargs)

        self.set_client(**kwargs)
@@ -121,62 +121,62 @@ def preset_for_kind(kind: str):
        return PRESET_DIVINE_INTELLECT  # Assuming adding detail uses the same preset as divine intellect
    elif kind == "edit_fix_exposition":
        return PRESET_DIVINE_INTELLECT  # Assuming fixing exposition uses the same preset as divine intellect
    elif kind == "visualize":
        return PRESET_SIMPLE_1
    else:
        return PRESET_SIMPLE_1  # Default preset if none of the kinds match


def max_tokens_for_kind(kind: str, total_budget: int):
    if kind == "conversation":
        return 75
    elif kind == "conversation_old":
        return 75
    elif kind == "conversation_long":
        return 300
    elif kind == "conversation_select_talking_actor":
        return 30
    elif kind == "summarize":
        return 500
    elif kind == "analyze":
        return 500
    elif kind == "analyze_creative":
        return 1024
    elif kind == "analyze_long":
        return 2048
    elif kind == "analyze_freeform":
        return 500
    elif kind == "analyze_freeform_medium":
        return 192
    elif kind == "analyze_freeform_medium_short":
        return 128
    elif kind == "analyze_freeform_short":
        return 10
    elif kind == "narrate":
        return 500
    elif kind == "story":
        return 300
    elif kind == "create":
        return min(1024, int(total_budget * 0.35))
    elif kind == "create_concise":
        return min(400, int(total_budget * 0.25))
    elif kind == "create_precise":
        return min(400, int(total_budget * 0.25))
    elif kind == "create_short":
        return 25
    elif kind == "director":
        return min(192, int(total_budget * 0.25))
    elif kind == "director_short":
        return 25
    elif kind == "director_yesno":
        return 2
    elif kind == "edit_dialogue":
        return 100
    elif kind == "edit_add_detail":
        return 200
    elif kind == "edit_fix_exposition":
        return 1024
    elif kind == "visualize":
        return 150
    else:
        return 150  # Default value if none of the kinds match
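Several of the budgets above are capped at a fraction of the total context budget rather than being flat values; the `create` rule, for instance, is `min(1024, int(total_budget * 0.35))`. As a generic sketch (hypothetical helper name):

```python
def capped(limit: int, total_budget: int, fraction: float) -> int:
    # never exceed the flat limit, but also never claim more than the
    # given fraction of the total token budget
    return min(limit, int(total_budget * fraction))


small_budget = capped(1024, 2048, 0.35)   # fraction dominates
large_budget = capped(1024, 8192, 0.35)   # flat limit dominates
```

On small contexts the fraction keeps generation budgets proportional; on large contexts the flat limit prevents runaway responses.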
|
|
@ -20,6 +20,8 @@ WORLD_STATE = str(Prompt.get("world_state.system-analyst"))
|
|||
|
||||
SUMMARIZE = str(Prompt.get("summarizer.system"))
|
||||
|
||||
VISUALIZE = str(Prompt.get("visual.system"))
|
||||
|
||||
# CAREBEAR PROMPTS
|
||||
|
||||
ROLEPLAY_NO_DECENSOR = str(Prompt.get("conversation.system-no-decensor"))
|
||||
|
@ -41,3 +43,5 @@ EDITOR_NO_DECENSOR = str(Prompt.get("editor.system-no-decensor"))
|
|||
WORLD_STATE_NO_DECENSOR = str(Prompt.get("world_state.system-analyst-no-decensor"))
|
||||
|
||||
SUMMARIZE_NO_DECENSOR = str(Prompt.get("summarizer.system-no-decensor"))
|
||||
|
||||
VISUALIZE_NO_DECENSOR = str(Prompt.get("visual.system-no-decensor"))
@@ -1,4 +1,5 @@
 import random
+import re

 import httpx
 import structlog
@@ -28,20 +29,23 @@ class TextGeneratorWebuiClient(ClientBase):
         parameters["stop"] = parameters["stopping_strings"]

-        # Half temperature on -Yi- models
-        if (
-            self.model_name
-            and "-yi-" in self.model_name.lower()
-            and parameters["temperature"] > 0.1
-        ):
-            parameters["temperature"] = parameters["temperature"] / 2
+        if self.model_name and self.is_yi_model():
+            parameters["smoothing_factor"] = 0.3
+            # also half the temperature
+            parameters["temperature"] = max(0.1, parameters["temperature"] / 2)
             log.debug(
-                "halfing temperature for -yi- model",
-                temperature=parameters["temperature"],
+                "applying temperature smoothing for Yi model",
+                temperature=parameters["temperature"],
             )

     def set_client(self, **kwargs):
         self.client = AsyncOpenAI(base_url=self.api_url + "/v1", api_key="sk-1111")

+    def is_yi_model(self):
+        model_name = self.model_name.lower()
+        # regex match for "yi" encased by non-word characters
+        return bool(re.search(r"[\-_]yi[\-_]", model_name))
+
     async def get_model_name(self):
         async with httpx.AsyncClient() as client:
             response = await client.get(
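The `is_yi_model` check added above only matches "yi" when it is fenced by `-` or `_`, so finetunes like "Nous-Yi-34B" match while names that merely contain the letters (e.g. "flying") do not. Isolating just that regex:

```python
import re


def is_yi_model(model_name: str) -> bool:
    # matches "yi" only when fenced by "-" or "_", as in the diff above
    return bool(re.search(r"[\-_]yi[\-_]", model_name.lower()))
```

One caveat worth noting: a model named exactly "Yi-34B" has no leading separator, so this pattern would not match it; the fence on both sides is a deliberate trade-off against false positives.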
@@ -1,6 +1,6 @@
 import datetime
 import os
-from typing import TYPE_CHECKING, ClassVar, Dict, Optional, Union
+from typing import TYPE_CHECKING, ClassVar, Dict, Optional, TypeVar, Union

 import pydantic
 import structlog
@@ -40,6 +40,9 @@ class Client(BaseModel):
         extra = "ignore"


+ClientType = TypeVar("ClientType", bound=Client)
+
+
 class AgentActionConfig(BaseModel):
     value: Union[int, float, str, bool, None] = None

@@ -259,7 +262,8 @@ class RecentScenes(BaseModel):

 class Config(BaseModel):
-    clients: Dict[str, Client] = {}
+    clients: Dict[str, ClientType] = {}

     game: Game

     agents: Dict[str, Agent] = {}
@@ -297,6 +301,19 @@ class SceneAssetUpload(BaseModel):
     content: str = None


+def prepare_client_config(clients: dict) -> dict:
+    # clients can specify a custom config model in
+    # client_cls.config_cls so we need to convert the
+    # client config to the correct model
+
+    for client_name, client_config in clients.items():
+        client_cls = get_client_class(client_config.get("type"))
+        if client_cls:
+            config_cls = getattr(client_cls, "config_cls", None)
+            if config_cls:
+                clients[client_name] = config_cls(**client_config)
+
+
 def load_config(
     file_path: str = "./config.yaml", as_model: bool = False
 ) -> Union[dict, Config]:
@@ -311,6 +328,7 @@ def load_config(
         config_data = yaml.safe_load(file)

     try:
+        prepare_client_config(config_data.get("clients", {}))
         config = Config(**config_data)
         config.recent_scenes.clean()
     except pydantic.ValidationError as e:
@@ -336,6 +354,7 @@ def save_config(config, file_path: str = "./config.yaml"):
     elif isinstance(config, dict):
         # validate
         try:
+            prepare_client_config(config.get("clients", {}))
            config = Config(**config).model_dump(exclude_none=True)
         except pydantic.ValidationError as e:
             log.error("config validation", error=e)
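The `prepare_client_config` addition above lets a client implementation declare its own config model (`config_cls`) with extra fields, which is what the runpod vLLM client example relies on. A minimal sketch of the mechanism using plain dataclasses as stand-ins for the real pydantic models (all class and registry names here are illustrative only):

```python
from dataclasses import dataclass


@dataclass
class BaseClientConfig:
    type: str
    name: str = ""


@dataclass
class RunPodClientConfig(BaseClientConfig):
    api_key: str = ""  # extra field a specific client implementation may add


# stand-in for get_client_class() resolving a type string to a client's config_cls
REGISTRY = {"runpod_vllm": RunPodClientConfig}


def prepare_client_config(clients: dict) -> dict:
    # convert each raw dict to the client's own config model, if it declares one
    for client_name, client_config in clients.items():
        config_cls = REGISTRY.get(client_config.get("type"))
        if config_cls:
            clients[client_name] = config_cls(**client_config)
    return clients
```

Converting in place before `Config(**config_data)` means validation sees the specialized model rather than a bare dict, so extra per-client fields survive the load/save round trip.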
@@ -38,6 +38,8 @@ class Emission:
     id: str = None
     details: str = None
     data: dict = None
+    websocket_passthrough: bool = False
+    meta: dict = dataclasses.field(default_factory=dict)


 def emit(
@@ -125,8 +127,9 @@ class Receiver:
     def handle(self, emission: Emission):
         fn = getattr(self, f"handle_{emission.typ}", None)
         if not fn:
-            return
+            return False
         fn(emission)
+        return True

     def connect(self):
         for typ in handlers:
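The `handle` change above makes the dispatch report whether a `handle_<typ>` method existed, which is what enables the websocket passthrough fallback later in the PR. A self-contained sketch of the same name-based dispatch (simplified signatures; the real method takes an `Emission`):

```python
class Receiver:
    # dispatch by emission type; report whether a handler method existed
    def handle(self, typ: str, payload: dict) -> bool:
        fn = getattr(self, f"handle_{typ}", None)
        if not fn:
            return False
        fn(payload)
        return True


class EchoReceiver(Receiver):
    def __init__(self):
        self.seen = []

    def handle_system(self, payload):
        self.seen.append(payload)
```

Returning `False` instead of `None` for unhandled types gives callers an explicit signal to fall back on, without touching any existing subclass.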
@@ -34,6 +34,8 @@ MessageEdited = signal("message_edited")

 ConfigSaved = signal("config_saved")

+ImageGenerated = signal("image_generated")
+
 handlers = {
     "system": SystemMessage,
     "narrator": NarratorMessage,
@@ -60,4 +62,5 @@ handlers = {
     "audio_queue": AudioQueue,
     "config_saved": ConfigSaved,
     "status": StatusMessage,
+    "image_generated": ImageGenerated,
 }
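The new `ImageGenerated` signal plugs into the same registry pattern as the existing ones: a named signal object that receivers connect callbacks to. A minimal illustration of that connect/send surface (the project uses a real signal library; this tiny `Signal` class only mirrors the shape):

```python
class Signal:
    """Minimal stand-in for a named signal with connect/send semantics."""

    def __init__(self, name):
        self.name = name
        self._receivers = []

    def connect(self, fn):
        self._receivers.append(fn)

    def send(self, payload):
        for fn in self._receivers:
            fn(payload)


ImageGenerated = Signal("image_generated")

received = []
ImageGenerated.connect(received.append)
ImageGenerated.send({"asset_id": "abc"})
```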
@@ -163,14 +163,9 @@ def emit_agent_status(cls, agent=None):
             data=cls.config_options(),
         )
     else:
-        emit(
-            "agent_status",
-            message=agent.verbose_name or "",
-            status=agent.status,
-            id=agent.agent_type,
-            details=agent.agent_details,
-            data=cls.config_options(agent=agent),
-        )
+        asyncio.create_task(agent.emit_status())
+        # loop = asyncio.get_event_loop()
+        # loop.run_until_complete(agent.emit_status())


 def emit_agents_status(*args, **kwargs):
@@ -178,9 +173,17 @@ def emit_agents_status(*args, **kwargs):
     Will emit status of all agents
     """
     # log.debug("emit", type="agent status")
-    for typ, cls in agents.AGENT_CLASSES.items():
+    for typ, cls in sorted(
+        agents.AGENT_CLASSES.items(), key=lambda x: x[1].verbose_name
+    ):
         agent = AGENTS.get(typ)
         emit_agent_status(cls, agent)


 handlers["request_agent_status"].connect(emit_agents_status)


+async def agent_ready_checks():
+    for agent in AGENTS.values():
+        if agent and agent.enabled:
+            await agent.ready_check()
@@ -3,6 +3,11 @@
 {%- with memory_query=query -%}
 {% include "extra-context.jinja2" %}
 {% endwith -%}
+{% set related_character = scene.parse_character_from_line(query) -%}
+{% if related_character -%}
+<|SECTION:{{ related_character.name|upper }}|>
+{{ related_character.sheet }}
+{% endif %}
 <|CLOSE_SECTION|>
 {% endblock %}
 {% set scene_history=scene.context_history(budget=max_tokens-200-count_tokens(self.rendered_context())) %}
@@ -9,7 +9,7 @@
 {% endfor %}
 <|CLOSE_SECTION|>
 <|SECTION:TASK|>
-Paraphrase the following text to make it fit the narrative tone. Keep the information and the meaning the same, but change the wording and sentence structure.
+Paraphrase the following text to fit the narrative thus far. Keep the information and the meaning the same, but change the wording and sentence structure.

 Text to paraphrase:
29 src/talemate/prompts/templates/visual/extra-context.jinja2 Normal file
@@ -0,0 +1,29 @@
+Scenario Premise:
+{{ scene.description }}
+
+Content Context: This is a specific scene from {{ scene.context }}
+
+{% block rendered_context_static %}
+{# GENERAL REINFORCEMENTS #}
+{% set general_reinforcements = scene.world_state.filter_reinforcements(insert=['all-context']) %}
+{%- for reinforce in general_reinforcements %}
+{{ reinforce.as_context_line|condensed }}
+
+{% endfor %}
+{# END GENERAL REINFORCEMENTS #}
+{# ACTIVE PINS #}
+{%- for pin in scene.active_pins %}
+{{ pin.time_aware_text|condensed }}
+
+{% endfor %}
+{# END ACTIVE PINS #}
+{% endblock %}
+
+{# MEMORY #}
+{%- if memory_query %}
+{%- for memory in query_memory(memory_query, as_question_answer=False, max_tokens=max_tokens-500-count_tokens(self.rendered_context_static()), iterate=10) -%}
+{{ memory|condensed }}
+
+{% endfor -%}
+{% endif -%}
+{# END MEMORY #}
@@ -0,0 +1,28 @@
+{{ query_scene("What is "+character.name+"'s age, race, and physical appearance?", full_context) }}
+
+{{ query_scene("What clothes is "+character.name+" currently wearing? Provide a detailed description.", full_context) }}
+
+{{ query_scene("What is "+character.name+"'s current scene description?", full_context) }}
+
+{{ query_scene("Where is "+character.name+" currently? Briefly describe the environment and provide genre context.", full_context) }}
+{% set emotion = scene.world_state.character_emotion(character.name) %}
+{% if emotion %}{{ character.name }}'s current emotion: {{ emotion }}{% endif %}
+<|SECTION:TASK|>
+{% if instructions %}Requested Image: {{ instructions }}{% endif %}
+
+Describe the scene to the painter to ensure he will capture all the important details when drawing a dynamic and truthful image of {{ character.name }}.
+
+Include details about {{ character.name }}'s appearance exactly as they are, and {{ character.name }}'s current pose.
+Include a description of the environment.
+
+THE IMAGE MUST ONLY INCLUDE {{ character.name }}. EXCLUDE ALL OTHER CHARACTERS.
+YOU MUST ONLY DESCRIBE WHAT IS CURRENTLY VISIBLE IN THE SCENE.
+
+Required information: name, age, race, gender, physique, expression, pose, clothes/equipment, hair style, hair color, skin color, eyes, scars, tattoos, piercings, a fitting color scheme and any other relevant details.
+
+You must provide your answer as a comma delimited list of keywords.
+Keywords should be ordered: physical appearance, emotion, action, environment, color scheme.
+You must provide many keywords to describe the character and the environment in great detail.
+Your answer must be suitable as a stable-diffusion image generation prompt.
+<|CLOSE_SECTION|>
+{{ set_prepared_response(character.name+",") }}
@@ -0,0 +1,18 @@
+{% set scene_context = scene.context_history(budget=max_tokens-2048)|join("\n") %}
+{{ query_text("What does the current environment look like? Include details about appearance, theme and vibes.", scene_context) }}
+<|SECTION:TASK|>
+{% if instructions %}Requested Image: {{ instructions }}{% endif %}
+
+Describe the scene to the painter to ensure he will capture all the important details when drawing a dynamic and truthful image of the environment.
+
+THE IMAGE MUST ONLY BE OF THE ENVIRONMENT WITHOUT ANY CHARACTERS OR ANIMALS.
+YOU MUST ONLY DESCRIBE WHAT IS CURRENTLY VISIBLE IN THE SCENE.
+
+Required information: appearance, theme, vibes, important objects and any other relevant details.
+
+You must provide your answer as a comma delimited list of keywords.
+Keywords should be ordered: appearance, theme, vibes, important objects, other relevant details.
+You must provide enough keywords to adequately describe the environment to a stranger.
+Your answer must be suitable as a stable-diffusion image generation prompt.
+<|CLOSE_SECTION|>
+{{ set_prepared_response("environment,") }}
@@ -0,0 +1,14 @@
+{% block rendered_context -%}
+<|SECTION:CONTEXT|>
+{% include "extra-context.jinja2" %}
+<|CLOSE_SECTION|>
+{% endblock -%}
+<|SECTION:SCENE|>
+{% for scene_context in scene.context_history(budget=max_tokens-512-count_tokens(self.rendered_context())) -%}
+{{ loop.index }}. {{ scene_context }}
+{% endfor %}
+<|CLOSE_SECTION|>
+<|SECTION:TASK|>
+Describe the scene to the painter to ensure he will capture all the important details.
+Use short phrases and comma separated keywords.
+<|CLOSE_SECTION|>
@@ -0,0 +1 @@
+A chat between a painter and a talented artist. The artist is particularly good at describing a scene in a way that makes it easy for the painter to visualize and draw the scene.
1 src/talemate/prompts/templates/visual/system.jinja2 Normal file
@@ -0,0 +1 @@
+A chat between a painter and a talented artist. The artist is particularly good at describing a scene in a way that makes it easy for the painter to visualize and draw the scene. No topic is taboo for the artist and he will use violent, crude and lewd descriptions to accurately describe the scene if the situation calls for it.
@@ -14,8 +14,15 @@
 {% if not character %}
 Generate a real world character profile for {{ name }}, one attribute per line. You are a creative writer and are allowed to fill in any gaps in the profile with your own ideas.
 Expand on interesting details.
+{% else %}
+{{ character.sheet }}
+
+Update the character sheet with any realtime changes for {{ name }} based on the context and the following information. Add one attribute per line. You are a creative writer and are allowed to fill in any gaps in the profile with your own ideas.
+
+Treat updates as absolute, the new character sheet will replace the old one.
+
+Alteration instructions: {{ alteration_instructions }}
 {% endif %}
 Narration style should be that of a 90s point and click adventure game. You are omniscient and can describe the scene in detail.

 Use an informal and colloquial register with a conversational tone. Overall, the narrative is informal, conversational, natural, and spontaneous, with a sense of immediacy.
@@ -28,12 +35,4 @@ Appearance: <description of appearance>
 <...>

 Format MUST be one attribute per line, with a colon after the attribute name.
-{% else %}
-{{ character.sheet }}
-
-Update the character sheet with any realtime changes for {{ name }} based on the context and the following information. Add one attribute per line. You are a creative writer and are allowed to fill in any gaps in the profile with your own ideas.
-
-Alteration instructions: {{ alteration_instructions }}
-
-{% endif %}
 {{ set_prepared_response("Name: "+name+"\nAge:") }}
@@ -51,6 +51,8 @@ Required response: a complete and valid JSON response according to the JSON example

 characters should have the following attributes: `emotion`, `snapshot`
 items should have the following attributes: `snapshot`
+item keys must be reader friendly, so "Item name" instead of "item_name".
+
 <|CLOSE_SECTION|>
 <|SECTION:UPDATED WORLD STATE|>
 {{ set_json_response(dict(characters={"name":{}}), cutoff=3) }}
@@ -43,6 +43,7 @@ Required response: a complete and valid JSON response according to the JSON example

 characters should have the following attributes: `name`, `emotion`, `snapshot`
 items should have the following attributes: `name`, `snapshot`
+item keys must be reader friendly, so "Item name" instead of "item_name".

 You must not copy the example, write your own descriptions.
 <|CLOSE_SECTION|>
@@ -35,9 +35,9 @@ Use your imagination to fill in gaps in order to answer the question in a confident manner.
 You are omniscient and can describe the scene in detail.

 {% if reinforcement.insert == 'sequential' %}
-PROVIDE A SUCCINCT ANSWER TO THE QUESTION.
+YOUR ANSWER MUST BE SHORT AND TO THE POINT.
 YOUR ANSWER MUST BE A SINGLE SENTENCE.
 YOUR RESPONSE MUST BE ONLY THE ANSWER TO THE QUESTION. NEVER EXPLAIN YOUR REASONING.
 {% endif %}
 {% if instructions %}
 {{ instructions }}
@@ -65,7 +65,6 @@ You are omniscient and can describe the scene in detail.
 {% if reinforcement.insert == 'sequential' %}
-YOUR ANSWER MUST BE SHORT AND TO THE POINT.
 YOUR ANSWER MUST BE A SINGLE SENTENCE.
 YOUR RESPONSE MUST BE ONLY THE ANSWER TO THE QUESTION. NEVER EXPLAIN YOUR REASONING.
 {% endif %}
 {% if instructions %}
 {{ instructions }}
@@ -42,6 +42,7 @@ async def websocket_endpoint(websocket, path):
     async def send_status():
         while True:
             await instance.emit_clients_status()
+            await instance.agent_ready_checks()
             await asyncio.sleep(3)

     send_status_task = asyncio.create_task(send_status())
@@ -116,9 +117,9 @@ async def websocket_endpoint(websocket, path):
                 query = data.get("query", "")
                 handler.request_scenes_list(query)
             elif action_type == "configure_clients":
-                handler.configure_clients(data.get("clients"))
+                await handler.configure_clients(data.get("clients"))
             elif action_type == "configure_agents":
-                handler.configure_agents(data.get("agents"))
+                await handler.configure_agents(data.get("agents"))
             elif action_type == "request_client_status":
                 await handler.request_client_status()
             elif action_type == "delete_message":
26 src/talemate/server/websocket_plugin.py Normal file
@@ -0,0 +1,26 @@
+import structlog
+
+__all__ = [
+    "Plugin",
+]
+
+log = structlog.get_logger("talemate.server.visual")
+
+
+class Plugin:
+    router = "router"
+
+    @property
+    def scene(self):
+        return self.websocket_handler.scene
+
+    def __init__(self, websocket_handler):
+        self.websocket_handler = websocket_handler
+
+    async def handle(self, data: dict):
+        log.info(f"{self.router} action", action=data.get("action"))
+        fn = getattr(self, f"handle_{data.get('action')}", None)
+        if fn is None:
+            return
+
+        await fn(data)
@@ -28,6 +28,10 @@ from talemate.server import (
     world_state_manager,
 )

+__all__ = [
+    "WebsocketHandler",
+]
+
 log = structlog.get_logger("talemate.server.websocket_server")

 AGENT_INSTANCES = {}
@@ -54,7 +58,9 @@ class WebsocketHandler(Receiver):
         # to connect signals handlers to the websocket handler
         self.connect()

-        self.connect_llm_clients()
+        # connect LLM clients
+        loop = asyncio.get_event_loop()
+        loop.run_until_complete(self.connect_llm_clients())

         self.routes = {
             assistant.AssistantPlugin.router: assistant.AssistantPlugin(self),
@@ -77,10 +83,24 @@ class WebsocketHandler(Receiver):
             devtools.DevToolsPlugin.router: devtools.DevToolsPlugin(self),
         }

+        self.set_agent_routers()
+
         # self.request_scenes_list()

         # instance.emit_clients_status()

+    def set_agent_routers(self):
+
+        for agent_type, agent in instance.AGENTS.items():
+            handler_cls = getattr(agent, "websocket_handler", None)
+            if not handler_cls:
+                continue
+
+            log.info(
+                "Setting agent router", agent_type=agent_type, router=handler_cls.router
+            )
+            self.routes[handler_cls.router] = handler_cls(self)
+
     def disconnect(self):
         super().disconnect()
         abort_wait_for_input()
@@ -89,7 +109,7 @@ class WebsocketHandler(Receiver):
         if memory_agent and self.scene:
             memory_agent.close_db(self.scene)

-    def connect_llm_clients(self):
+    async def connect_llm_clients(self):
         client = None

         for client_name, client_config in self.llm_clients.items():
@@ -108,9 +128,9 @@ class WebsocketHandler(Receiver):
                 client_type=client.client_type,
             )

-        self.connect_agents()
+        await self.connect_agents()

-    def connect_agents(self):
+    async def connect_agents(self):
         if not self.llm_clients:
             instance.emit_agents_status()
             return
@@ -130,7 +150,7 @@ class WebsocketHandler(Receiver):
                 log.debug("Linked agent", agent_typ=agent_typ, client=client.name)
                 agent = instance.get_agent(agent_typ, client=client)
                 agent.client = client
-                agent.apply_config(**agent_config)
+                await agent.apply_config(**agent_config)

         instance.emit_agents_status()

@@ -188,7 +208,7 @@ class WebsocketHandler(Receiver):
         # Schedule the put coroutine to run as soon as possible
         loop.call_soon_threadsafe(lambda: self.out_queue.put_nowait(data))

-    def configure_clients(self, clients):
+    async def configure_clients(self, clients):
         existing = set(self.llm_clients.keys())

         self.llm_clients = {}
@@ -208,7 +228,9 @@ class WebsocketHandler(Receiver):
                 "type": client["type"],
             }
             for dfl_key in client_cls.Meta().defaults.dict().keys():
-                client_config[dfl_key] = client.get(dfl_key)
+                client_config[dfl_key] = client.get(
+                    dfl_key, client.get("data", {}).get(dfl_key)
+                )

         # find clients that have been removed
         removed = existing - set(self.llm_clients.keys())
@@ -230,12 +252,12 @@ class WebsocketHandler(Receiver):

         self.config["clients"] = self.llm_clients

-        self.connect_llm_clients()
+        await self.connect_llm_clients()
         save_config(self.config)

         instance.sync_emit_clients_status()

-    def configure_agents(self, agents):
+    async def configure_agents(self, agents):
         self.agents = {typ: {} for typ in instance.agent_types()}

         log.debug("Configuring agents")
@@ -255,23 +277,31 @@ class WebsocketHandler(Receiver):
             if getattr(agent_instance, "actions", None):
                 self.agents[name]["actions"] = agent.get("actions", {})

-                agent_instance.apply_config(**self.agents[name])
+                await agent_instance.apply_config(**self.agents[name])
                 log.debug("Configured agent", name=name)
                 continue

             if name not in self.agents:
                 continue

-            if agent["client"] not in self.llm_clients:
+            if isinstance(agent["client"], dict):
+                try:
+                    client_name = agent["client"]["client"]["value"]
+                except KeyError:
+                    continue
+            else:
+                client_name = agent["client"]
+
+            if client_name not in self.llm_clients:
                 continue

             self.agents[name] = {
-                "client": self.llm_clients[agent["client"]]["name"],
+                "client": self.llm_clients[client_name]["name"],
                 "name": name,
             }

             agent_instance = instance.get_agent(name, **self.agents[name])
-            agent_instance.client = self.llm_clients[agent["client"]]["client"]
+            agent_instance.client = self.llm_clients[client_name]["client"]

             if agent_instance.has_toggle:
                 self.agents[name]["enabled"] = agent["enabled"]
@@ -279,13 +309,13 @@ class WebsocketHandler(Receiver):
             if getattr(agent_instance, "actions", None):
                 self.agents[name]["actions"] = agent.get("actions", {})

-            agent_instance.apply_config(**self.agents[name])
+            await agent_instance.apply_config(**self.agents[name])

             log.debug(
                 "Configured agent",
                 name=name,
-                client_name=self.llm_clients[agent["client"]]["name"],
-                client=self.llm_clients[agent["client"]]["client"],
+                client_name=self.llm_clients[client_name]["name"],
+                client=self.llm_clients[client_name]["client"],
             )

         self.config["agents"] = self.agents
@@ -293,6 +323,24 @@ class WebsocketHandler(Receiver):

         instance.emit_agents_status()

+    def handle(self, emission: Emission):
+        called = super().handle(emission)
+
+        if called is False and emission.websocket_passthrough:
+            log.debug(
+                "emission passthrough", emission=emission.message, typ=emission.typ
+            )
+            try:
+                self.queue_put(
+                    {
+                        "type": emission.typ,
+                        "message": emission.message,
+                        "data": emission.data,
+                    }
+                )
+            except Exception as e:
+                log.error("emission passthrough", error=traceback.format_exc())
+
     def handle_system(self, emission: Emission):
         self.queue_put(
             {
@@ -457,6 +505,7 @@ class WebsocketHandler(Receiver):
                 "name": emission.id,
                 "status": emission.status,
                 "data": emission.data,
+                "meta": emission.meta,
             }
         )
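The `configure_clients` change above falls back to a nested `data` dict when a default key is missing at the top level of the incoming client payload. The lookup pattern on its own:

```python
def get_with_data_fallback(client: dict, key: str):
    # top-level value wins; fall back to client["data"][key] when the key is absent
    return client.get(key, client.get("data", {}).get(key))
```

Note that `dict.get(key, fallback)` only uses the fallback when the key is absent entirely; a key explicitly set to `None` at the top level would still win over the nested value.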
@@ -24,8 +24,8 @@ import talemate.util as util
 from talemate.client.context import ClientContext, ConversationContext
 from talemate.config import Config, SceneConfig, load_config
 from talemate.context import rerun_context
-from talemate.emit import Emitter, emit, wait_for_input
-from talemate.emit.signals import ConfigSaved, handlers
+from talemate.emit import Emission, Emitter, emit, wait_for_input
+from talemate.emit.signals import ConfigSaved, ImageGenerated, handlers
 from talemate.exceptions import (
     ExitScene,
     LLMAccuracyError,
@@ -151,6 +151,12 @@ class Character:

         return random.choice(self.example_dialogue)

+    def set_cover_image(self, asset_id: str, initial_only: bool = False):
+        if self.cover_image and initial_only:
+            return
+
+        self.cover_image = asset_id
+
     def sheet_filtered(self, *exclude):

         sheet = self.base_attributes or {
@@ -264,8 +270,9 @@ class Character:
         for k, v in self.base_attributes.items():
             if isinstance(v, str):
                 self.base_attributes[k] = v.replace(f"{orig_name}", self.name)
-        for i, v in enumerate(self.details):
+        for i, v in list(self.details.items()):
             self.details[i] = v.replace(f"{orig_name}", self.name)
+        self.memory_dirty = True

     def load_from_image_metadata(self, image_path: str, file_format: str):
         """
@@ -354,6 +361,8 @@ class Character:
         for key, value in kwargs.items():
             setattr(self, key, value)

+        self.memory_dirty = True
+
     async def commit_to_memory(self, memory_agent):
         """
         Commits this character's details to the memory agent. (vectordb)
@@ -895,7 +904,7 @@ class Scene(Emitter):
     def __del__(self):
         self.disconnect()

-    def on_config_saved(self, event: ConfigSaved):
+    def on_config_saved(self, event):
         self.config = event.data
         self.emit_status()

@@ -1216,6 +1225,15 @@ class Scene(Emitter):
     def num_npc_characters(self) -> int:
         return len(list(self.get_npc_characters()))

+    def parse_character_from_line(self, line: str) -> Character:
+        """
+        Parse a character from a line of text
+        """
+
+        for actor in self.actors:
+            if actor.character.name.lower() in line.lower():
+                return actor.character
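The `parse_character_from_line` addition above does a case-insensitive substring match of each character name against the query line; this is what lets `narrate_query` include a character sheet when the query mentions a character. A standalone sketch (the `Character` class and function here are minimal stand-ins, not the real scene objects):

```python
class Character:
    def __init__(self, name):
        self.name = name


def parse_character_from_line(characters, line):
    # case-insensitive substring match, mirroring the method in the diff
    for character in characters:
        if character.name.lower() in line.lower():
            return character
    return None
```

Because this is plain substring matching, a short name that happens to occur inside another word would also match; that trade-off is acceptable here since the caller only uses it to pull in extra context.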

     def get_characters(self) -> Generator[Character, None, None]:
         """
         Returns a list of all characters in the scene
@@ -1428,7 +1446,7 @@ class Scene(Emitter):
     async def _rerun_narrator_message(self, message):
         emit("remove_message", "", id=message.id)
         source, arg = (
-            message.source.split(":")
+            message.source.split(":", 1)
             if message.source and ":" in message.source
             else (message.source, None)
         )
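The `maxsplit=1` fix above matters whenever the argument itself contains a colon: a bare `split(":")` would produce three or more parts and break the two-element unpacking with a `ValueError`. Illustrated with a hypothetical source string:

```python
# hypothetical message source of the form "<action>:<argument>"
source = "narrate_query:what happened at dawn: the aftermath"

# maxsplit=1 keeps everything after the first colon together as the argument
kind, arg = source.split(":", 1)

# an unbounded split would yield three parts here, breaking two-way unpacking
parts = source.split(":")
```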
@@ -515,3 +515,9 @@ class WorldState(BaseModel):
             for manual_context in self.manual_context.values()
             if manual_context.meta.get("typ") == "world_state"
         }
+
+    def character_emotion(self, character_name: str) -> str:
+        if character_name in self.characters:
+            return self.characters[character_name].emotion
+
+        return None
41 talemate_frontend/package-lock.json generated
@@ -10,6 +10,7 @@
   "dependencies": {
     "@mdi/font": "7.4.47",
     "core-js": "^3.8.3",
+    "dot-prop": "^8.0.2",
     "roboto-fontface": "*",
     "vue": "^3.2.13",
     "vuetify": "^3.5.0",
@@ -4914,6 +4915,31 @@
         "tslib": "^2.0.3"
       }
     },
+    "node_modules/dot-prop": {
+      "version": "8.0.2",
+      "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-8.0.2.tgz",
+      "integrity": "sha512-xaBe6ZT4DHPkg0k4Ytbvn5xoxgpG0jOS1dYxSOwAHPuNLjP3/OzN0gH55SrLqpx8cBfSaVt91lXYkApjb+nYdQ==",
+      "dependencies": {
+        "type-fest": "^3.8.0"
+      },
+      "engines": {
+        "node": ">=16"
+      },
+      "funding": {
+        "url": "https://github.com/sponsors/sindresorhus"
+      }
+    },
+    "node_modules/dot-prop/node_modules/type-fest": {
+      "version": "3.13.1",
+      "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-3.13.1.tgz",
+      "integrity": "sha512-tLq3bSNx+xSpwvAJnzrK0Ep5CLNWjvFTOp71URMaAEWBfRb9nnJiBoUe0tF8bI4ZFO3omgBR6NvnbzVUT3Ly4g==",
+      "engines": {
+        "node": ">=14.16"
+      },
+      "funding": {
+        "url": "https://github.com/sponsors/sindresorhus"
+      }
+    },
     "node_modules/dotenv": {
       "version": "10.0.0",
       "resolved": "https://registry.npmmirror.com/dotenv/-/dotenv-10.0.0.tgz",
@@ -14998,6 +15024,21 @@
         "tslib": "^2.0.3"
       }
     },
+    "dot-prop": {
+      "version": "8.0.2",
+      "resolved": "https://registry.npmjs.org/dot-prop/-/dot-prop-8.0.2.tgz",
+      "integrity": "sha512-xaBe6ZT4DHPkg0k4Ytbvn5xoxgpG0jOS1dYxSOwAHPuNLjP3/OzN0gH55SrLqpx8cBfSaVt91lXYkApjb+nYdQ==",
+      "requires": {
+        "type-fest": "^3.8.0"
+      },
+      "dependencies": {
+        "type-fest": {
+          "version": "3.13.1",
+          "resolved": "https://registry.npmjs.org/type-fest/-/type-fest-3.13.1.tgz",
+          "integrity": "sha512-tLq3bSNx+xSpwvAJnzrK0Ep5CLNWjvFTOp71URMaAEWBfRb9nnJiBoUe0tF8bI4ZFO3omgBR6NvnbzVUT3Ly4g=="
+        }
+      }
+    },
     "dotenv": {
       "version": "10.0.0",
       "resolved": "https://registry.npmmirror.com/dotenv/-/dotenv-10.0.0.tgz",
@@ -1,6 +1,6 @@
 {
   "name": "talemate_frontend",
-  "version": "0.19.0",
+  "version": "0.20.0",
   "private": true,
   "scripts": {
     "serve": "vue-cli-service serve",
@@ -10,6 +10,7 @@
   "dependencies": {
     "@mdi/font": "7.4.47",
     "core-js": "^3.8.3",
+    "dot-prop": "^8.0.2",
     "roboto-fontface": "*",
     "vue": "^3.2.13",
     "vuetify": "^3.5.0",
|
||||
|
|
|
@@ -1,10 +1,12 @@
 <template>
   <div v-if="isConnected()">
-    <v-list v-for="(agent, index) in state.agents" :key="index">
+    <v-list v-for="(agent, index) in state.agents" :key="index" density="compact">
       <v-list-item @click="editAgent(index)">
         <v-list-item-title>
           <v-progress-circular v-if="agent.status === 'busy'" indeterminate="disable-shrink" color="primary"
             size="14"></v-progress-circular>
+          <v-progress-circular v-else-if="agent.status === 'busy_bg'" indeterminate="disable-shrink" color="secondary"
+            size="14"></v-progress-circular>
           <v-icon v-else-if="agent.status === 'uninitialized'" color="orange" size="14">mdi-checkbox-blank-circle</v-icon>
           <v-icon v-else-if="agent.status === 'disabled'" color="grey-darken-2" size="14">mdi-checkbox-blank-circle</v-icon>
           <v-icon v-else-if="agent.status === 'error'" color="red-darken-1" size="14">mdi-checkbox-blank-circle</v-icon>

@@ -18,9 +20,42 @@
     </template>
   </v-tooltip>
 </v-list-item-title>
 <v-list-item-subtitle class="text-caption">
   {{ agent.client }}
 </v-list-item-subtitle>
 <div v-if="typeof(agent.client) === 'string'">
   <v-chip prepend-icon="mdi-network-outline" class="mr-1" size="x-small" color="grey" variant="tonal" label>{{ agent.client }}</v-chip>
   <!--
   <v-icon color="grey" size="x-small" v-bind="props">mdi-network-outline</v-icon>
   <span class="ml-1 text-caption text-bold text-grey-lighten-1">{{ agent.client }}</span>
   -->
 </div>
 <div v-else-if="typeof(agent.client) === 'object'">
   <v-tooltip v-for="(detail, key) in agent.client" :key="key" :text="detail.description">
     <template v-slot:activator="{ props }">
       <v-chip
         class="mr-1"
         size="x-small"
         v-bind="props"
         :prepend-icon="detail.icon"
         label
         :color="detail.color || 'grey'"
         variant="tonal"
       >
         {{ detail.value }}
       </v-chip>
     </template>
   </v-tooltip>

   <!--
   <div v-for="(detail, key) in agent.client" :key="key">
     <v-tooltip :text="detail.description" v-if="detail.icon != null">
       <template v-slot:activator="{ props }">
         <v-icon color="grey" size="x-small" v-bind="props">{{ detail.icon }}</v-icon>
       </template>
     </v-tooltip>
     <span class="ml-1 text-caption text-bold text-grey-lighten-1">{{ detail.value }}</span>
   </div>
   -->
 </div>
 <!--
 <v-chip class="mr-1" v-if="agent.status === 'disabled'" size="x-small">Disabled</v-chip>
 <v-chip v-if="agent.data.experimental" color="warning" size="x-small">experimental</v-chip>

@@ -74,7 +109,7 @@ export default {
 for(let i = 0; i < this.state.agents.length; i++) {
   let agent = this.state.agents[i];

-  if(!agent.data.requires_llm_client)
+  if(!agent.data.requires_llm_client || agent.meta.essential === false)
     continue

   if(agent.status === 'warning' || agent.status === 'error' || agent.status === 'uninitialized') {

@@ -133,21 +168,36 @@ export default {
 // Find the client with the given name
 const agent = this.state.agents.find(agent => agent.name === data.name);
 if (agent) {

   if(agent.name == 'tts') {
     console.log("agents: agent_status TTS", data)
   }

   // Update the model name of the client
   agent.client = data.client;
   agent.data = data.data;
   agent.status = data.status;
   agent.label = data.message;
   agent.meta = data.meta;
   agent.actions = {}
   for(let i in data.data.actions) {
-    agent.actions[i] = {enabled: data.data.actions[i].enabled, config: data.data.actions[i].config};
+    agent.actions[i] = {enabled: data.data.actions[i].enabled, config: data.data.actions[i].config, condition: data.data.actions[i].condition};
   }
   agent.enabled = data.data.enabled;

   // sort agents by label

   this.state.agents.sort((a, b) => {
     if(a.label < b.label) { return -1; }
     if(a.label > b.label) { return 1; }
     return 0;
   });

 } else {
   // Add the agent to the list of agents
   let actions = {}
   for(let i in data.data.actions) {
-    actions[i] = {enabled: data.data.actions[i].enabled, config: data.data.actions[i].config};
+    actions[i] = {enabled: data.data.actions[i].enabled, config: data.data.actions[i].config, condition: data.data.actions[i].condition};
   }
   this.state.agents.push({
     name: data.name,

@@ -157,6 +207,7 @@ export default {
     label: data.message,
     actions: actions,
     enabled: data.data.enabled,
     meta: data.meta,
   });
   console.log("agents: added new agent", this.state.agents[this.state.agents.length - 1], data)
 }
@@ -18,7 +18,7 @@

 </v-card-title>
 <v-card-text class="scrollable-content">
-  <v-select v-if="agent.data.requires_llm_client" v-model="agent.client" :items="agent.data.client" label="Client" @update:modelValue="save(false)"></v-select>
+  <v-select v-if="agent.data.requires_llm_client" v-model="selectedClient" :items="agent.data.client" label="Client" @update:modelValue="save(false)"></v-select>

   <v-alert type="warning" variant="tonal" density="compact" v-if="agent.data.experimental">
     This agent is currently experimental and may significantly decrease performance and / or require

@@ -26,6 +26,7 @@
   </v-alert>

   <v-card v-for="(action, key) in agent.actions" :key="key" density="compact">
+    <div v-if="testActionConditional(action)">
     <v-card-subtitle>
       <v-checkbox v-if="!actionAlwaysEnabled(key)" :label="agent.data.actions[key].label" hide-details density="compact" color="green" v-model="action.enabled" @update:modelValue="save(false)"></v-checkbox>
     </v-card-subtitle>

@@ -47,6 +48,7 @@
       </div>
     </div>
   </v-card-text>
+  </div>
 </v-card>

 </v-card-text>

@@ -55,6 +57,8 @@
 </template>

 <script>
 import {getProperty} from 'dot-prop';

 export default {
   props: {
     dialog: Boolean,

@@ -65,6 +69,7 @@ export default {
   return {
     saveTimeout: null,
     localDialog: this.state.dialog,
     selectedClient: null,
     agent: { ...this.state.currentAgent }
   };
 },

@@ -73,6 +78,9 @@ export default {
   immediate: true,
   handler(newVal) {
     this.localDialog = newVal;
     if(newVal) {
       this.selectedClient = typeof(this.agent.client) === 'object' && this.agent.client.client ? this.agent.client.client.value : this.agent.client;
     }
   }
 },
 'state.currentAgent': {

@@ -93,19 +101,40 @@ export default {
     return 'Disabled';
   }
 },
-actionAlwaysEnabled(action) {
-  if (action.charAt(0) === '_') {
+actionAlwaysEnabled(actionName) {
+  if (actionName.charAt(0) === '_') {
     return true;
   } else {
     return false;
   }
 },

 testActionConditional(action) {
   if(action.condition == null)
     return true;

   if(typeof(this.agent.client) !== 'object')
     return true;

   let value = getProperty(this.agent.actions, action.condition.attribute+".value");
   return value == action.condition.value;
 },

 close() {
   this.$emit('update:dialog', false);
 },
 save(delayed = false) {
   console.log("save", delayed);

   if(this.selectedClient != null) {
     if(typeof(this.agent.client) === 'object') {
       if(this.agent.client.client != null)
         this.agent.client.client.value = this.selectedClient;
     } else {
       this.agent.client = this.selectedClient;
     }
   }

   if(!delayed) {
     this.$emit('save', this.agent);
     return;
@@ -35,6 +35,11 @@
     <v-text-field v-model="client.model_name" v-else-if="clientMeta().manual_model" label="Manually specify model name" hint="It looks like we're unable to retrieve the model name automatically. The model name is used to match the appropriate prompt template. This is likely only important if you're locally serving a model."></v-text-field>
   </v-col>
 </v-row>
 <v-row v-for="field in clientMeta().extra_fields" :key="field.name">
   <v-col cols="12">
     <v-text-field v-model="client.data[field.name]" v-if="field.type==='text'" :label="field.label" :rules="[rules.required]" :hint="field.description"></v-text-field>
   </v-col>
 </v-row>
 <v-row>
   <v-col cols="4">
     <v-text-field v-model="client.max_token_length" v-if="requiresAPIUrl(client)" type="number" label="Context Length" :rules="[rules.required]"></v-text-field>
@@ -1,7 +1,7 @@
 <template>
-  <div v-if="expanded">
+  <v-sheet v-if="expanded" elevation="10">
     <v-img cover @click="toggle()" v-if="asset_id !== null" :src="'data:'+media_type+';base64, '+base64"></v-img>
-  </div>
+  </v-sheet>
   <v-list-subheader v-else @click="toggle()"><v-icon>mdi-image-frame</v-icon> Cover image
     <v-icon v-if="expanded" icon="mdi-chevron-down"></v-icon>
     <v-icon v-else icon="mdi-chevron-up"></v-icon>

@@ -49,6 +49,11 @@ export default {
       this.media_type = data.media_type;
     }
   }
   if(data.type === "scene_asset_character_cover_image") {
     this.asset_id = data.asset_id;
     this.base64 = data.asset;
     this.media_type = data.media_type;
   }
 },
 },
@@ -56,7 +56,7 @@ export default {
     if(newVal != null) {
       this.requestCoverImages();
     }
   }
 },
 },
 methods: {

@@ -120,7 +120,6 @@ export default {

 handleMessage(data) {
   if(data.type === 'assets') {
     console.log("ASSEsTS", data.assets)
     for(let id in data.assets) {
       let asset = data.assets[id];
       this.coverImages[id] = {

@@ -128,10 +127,12 @@ export default {
         mediaType: asset.mediaType,
       };
     }
     console.log("assets", this.coverImages, data)
   }
 },
 },
 mounted() {
   this.requestCoverImages();
 },
 created() {
   this.registerMessageHandler(this.handleMessage);
 },
@@ -7,7 +7,7 @@
 <v-list-subheader class="text-uppercase" v-else>
   <v-progress-circular indeterminate="disable-shrink" color="primary" size="20"></v-progress-circular> Waiting for config...
 </v-list-subheader>
-<div v-if="!loading && isConnected() && expanded && !configurationRequired() && appConfig !== null">
+<div v-if="!loading && isConnected() && expanded && sceneLoadingAvailable && appConfig !== null">
   <v-list-item>
     <div class="mb-3">
       <!-- Toggle buttons for switching between file upload and path input -->

@@ -43,7 +43,7 @@
     </div>
   </v-list-item>
 </div>
-<div v-else-if="configurationRequired()">
+<div v-else-if="!sceneLoadingAvailable">
   <v-alert type="warning" variant="tonal">You need to configure a Talemate client before you can load scenes.</v-alert>
 </div>
 <DefaultCharacter ref="defaultCharacterModal" @save="loadScene" @cancel="loadCanceled"></DefaultCharacter>

@@ -58,6 +58,9 @@ export default {
 components: {
   DefaultCharacter,
 },
 props: {
   sceneLoadingAvailable: Boolean
 },
 data() {
   return {
     loading: false,

@@ -75,7 +78,7 @@ export default {
 emits: {
   loading: null,
 },
-inject: ['getWebsocket', 'registerMessageHandler', 'isConnected', 'configurationRequired'],
+inject: ['getWebsocket', 'registerMessageHandler', 'isConnected'],
 methods: {
   // Method to show the DefaultCharacter modal
   showDefaultCharacterModal() {
@@ -311,6 +311,30 @@
     </template>
   </v-tooltip>

   <!-- visualizer actions -->

   <v-menu>
     <template v-slot:activator="{ props }">
       <v-btn class="hotkey mx-3" v-bind="props" :disabled="isInputDisabled() || !visualAgentReady" color="primary" icon>
         <v-icon>mdi-image-frame</v-icon>
       </v-btn>
     </template>
     <v-list>
       <v-list-subheader>Visualize</v-list-subheader>
       <!-- environment -->
       <v-list-item @click="sendHotButtonMessage('!vis_env')" prepend-icon="mdi-image-filter-hdr">
         <v-list-item-title>Visualize Environment</v-list-item-title>
         <v-list-item-subtitle>Generate a background image of the environment</v-list-item-subtitle>
       </v-list-item>
       <!-- npcs -->
       <v-list-item v-for="npc_name in npc_characters" :key="npc_name"
         @click="sendHotButtonMessage('!vis_char:' + npc_name)" prepend-icon="mdi-brush">
         <v-list-item-title>Visualize {{ npc_name }}</v-list-item-title>
         <v-list-item-subtitle>Generate a portrait of {{ npc_name }}</v-list-item-subtitle>
       </v-list-item>
     </v-list>
   </v-menu>

   <!-- save menu -->

   <v-menu>

@@ -371,6 +395,7 @@ export default {
   sceneHelp: "",
   sceneExperimental: false,
   canAutoSave: false,
   visualAgentReady: false,
   npc_characters: [],

   quickSettings: [

@@ -669,6 +694,8 @@ export default {
     }
   }
   return;
 } else if (data.type === 'agent_status' && data.name === 'visual') {
   this.visualAgentReady = data.status == 'idle' || data.status == 'busy' || data.status == 'busy_bg';
 } else if (data.type === "quick_settings" && data.action === 'set_done') {
   return;
 }
@@ -11,7 +11,10 @@
     Make sure the backend process is running.
   </p>
 </v-alert>
-<LoadScene ref="loadScene" @loading="sceneStartedLoading" />
+<LoadScene
+  ref="loadScene"
+  :scene-loading-available="ready && connected"
+  @loading="sceneStartedLoading" />
 <v-divider></v-divider>
 <div :style="(sceneActive && scene.environment === 'scene' ? 'display:block' : 'display:none')">
   <!-- <GameOptions v-if="sceneActive" ref="gameOptions" /> -->

@@ -25,7 +28,7 @@
 </v-navigation-drawer>

 <!-- settings navigation drawer -->
-<v-navigation-drawer v-model="drawer" app location="right">
+<v-navigation-drawer v-model="drawer" app location="right" width="300">
 <v-alert v-if="!connected" type="error" variant="tonal">
   Not connected to Talemate backend
   <p class="text-body-2" color="white">

@@ -49,7 +52,7 @@
 </v-navigation-drawer>

 <!-- debug tools navigation drawer -->
-<v-navigation-drawer v-model="debugDrawer" app location="right">
+<v-navigation-drawer v-model="debugDrawer" app location="right" width="400">
 <v-list>
   <v-list-subheader class="text-uppercase"><v-icon>mdi-bug</v-icon> Debug Tools</v-list-subheader>
   <DebugTools ref="debugTools"></DebugTools>

@@ -74,7 +77,7 @@
 <AudioQueue ref="audioQueue" />
 <v-spacer></v-spacer>
 <span v-if="version !== null">v{{ version }}</span>
-<span v-if="configurationRequired()">
+<span v-if="!ready">
   <v-icon icon="mdi-application-cog"></v-icon>
   <span class="ml-1">Configuration required</span>
 </span>

@@ -104,9 +107,10 @@
   Talemate
 </v-toolbar-title>
 <v-spacer></v-spacer>
 <VisualQueue ref="visualQueue" />
 <v-app-bar-nav-icon @click="toggleNavigation('debug')"><v-icon>mdi-bug</v-icon></v-app-bar-nav-icon>
 <v-app-bar-nav-icon @click="openAppConfig()"><v-icon>mdi-cog</v-icon></v-app-bar-nav-icon>
-<v-app-bar-nav-icon @click="toggleNavigation('settings')" v-if="configurationRequired()"
+<v-app-bar-nav-icon @click="toggleNavigation('settings')" v-if="!ready"
   color="red-darken-1"><v-icon>mdi-application-cog</v-icon></v-app-bar-nav-icon>
 <v-app-bar-nav-icon @click="toggleNavigation('settings')"
   v-else><v-icon>mdi-application-cog</v-icon></v-app-bar-nav-icon>

@@ -149,7 +153,7 @@
 <IntroView v-else
   @request-scene-load="(path) => { $refs.loadScene.loadJsonSceneFromPath(path); }"
   :version="version"
-  :scene-loading-available="!configurationRequired() && connected"
+  :scene-loading-available="ready && connected"
   :config="appConfig" />

 </v-container>

@@ -179,6 +183,7 @@ import AppConfig from './AppConfig.vue';
 import DebugTools from './DebugTools.vue';
 import AudioQueue from './AudioQueue.vue';
 import StatusNotification from './StatusNotification.vue';
 import VisualQueue from './VisualQueue.vue';

 import IntroView from './IntroView.vue';

@@ -200,6 +205,7 @@ export default {
   AudioQueue,
   StatusNotification,
   IntroView,
   VisualQueue,
 },
 name: 'TalemateApp',
 data() {

@@ -220,6 +226,7 @@ export default {
   errorMessage: null,
   errorNotification: false,
   notificatioonBusy: false,
   ready: false,
   inputHint: 'Enter your text...',
   messageInput: '',
   reconnectInterval: 3000,

@@ -352,7 +359,8 @@ export default {
 }

 if (data.type == "client_status" || data.type == "agent_status") {
-  if (this.configurationRequired()) {
+  this.ready = !this.configurationRequired();
+  if (!this.ready) {
     this.setNavigation('settings');
   }
   return;

@@ -558,10 +566,6 @@ export default {
 </script>

 <style scoped>
 .message.request_input {

 }

 .backdrop {
   background-image: url('/src/assets/logo-13.1-backdrop.png');
   background-repeat: no-repeat;
talemate_frontend/src/components/VisualQueue.vue (new file, 224 lines)

@@ -0,0 +1,224 @@
<template>
  <v-chip v-if="newImages" color="info" class="text-caption" label transition="scroll-x-reverse-transition">New Images</v-chip>
  <v-app-bar-nav-icon v-if="images.length > 0" @click="open">
    <v-icon>mdi-image-multiple-outline</v-icon>
    <v-icon v-if="newImages" class="btn-notification" color="info">mdi-alert-circle</v-icon>
  </v-app-bar-nav-icon>

  <v-dialog v-model="dialog" max-width="920" height="920">
    <v-card>
      <v-card-title>
        Visual queue
        <span v-if="generating">
          <v-progress-circular class="ml-1 mr-3" size="14" indeterminate="disable-shrink" color="primary">
          </v-progress-circular>
          <span class="text-caption text-primary">Generating...</span>
        </span>
      </v-card-title>
      <v-toolbar density="compact" color="grey-darken-4">
        <v-btn rounded="sm" @click="deleteAll()" prepend-icon="mdi-close-box-outline">Discard All</v-btn>
        <v-spacer></v-spacer>
        <span v-if="selectedImage != null">
          <v-btn :disabled="generating" rounded="sm" @click="regenerateImage()" prepend-icon="mdi-refresh">Regenerate</v-btn>
          <v-btn rounded="sm" @click="deleteImage()" prepend-icon="mdi-close-box-outline">Discard</v-btn>
        </span>
      </v-toolbar>
      <v-divider></v-divider>
      <v-card-text>
        <v-row>
          <v-col cols="2" class="overflow-content">
            <v-img v-for="(image, idx) in images" elevation="7" :src="imageSource(image.base64)" :key="idx" @click.stop="selectImage(idx)" class="img-thumb"></v-img>
          </v-col>
          <v-col cols="10" class="overflow-content">
            <v-row v-if="selectedImage != null">
              <v-col :cols="selectedImage.context.format === 'portrait' ? 7 : 12">
                <v-img max-height="800" :src="imageSource(selectedImage.base64)" :class="imagePreviewClass()"></v-img>
              </v-col>
              <v-col :cols="selectedImage.context.format === 'portrait' ? 5 : 12">
                <v-card elevation="7" density="compact">
                  <v-card-text>
                    <v-alert density="compact" v-if="selectedImage.context.vis_type" icon="mdi-panorama-variant-outline" variant="text" color="grey">
                      {{ selectedImage.context.vis_type }}
                    </v-alert>
                    <v-alert density="compact" v-if="selectedImage.context.prepared_prompt" icon="mdi-script-text-outline" variant="text" color="grey">
                      <v-row>
                        <v-col :cols="selectedImage.context.format === 'portrait' ? 12 : 4">
                          <v-tooltip :text="selectedImage.context.prompt" class="pre-wrap" max-width="400">
                            <template v-slot:activator="{ props }">
                              <span class="text-underline text-info" v-bind="props">Initial prompt</span>
                            </template>
                          </v-tooltip>
                        </v-col>
                        <v-col :cols="selectedImage.context.format === 'portrait' ? 12 : 4">
                          <v-tooltip :text="selectedImage.context.prepared_prompt" class="pre-wrap" max-width="400">
                            <template v-slot:activator="{ props }">
                              <span class="text-underline text-info" v-bind="props">Prepared prompt</span>
                            </template>
                          </v-tooltip>
                        </v-col>
                      </v-row>
                    </v-alert>
                    <v-alert density="compact" v-if="selectedImage.context.character_name" icon="mdi-account" variant="text" color="grey">
                      {{ selectedImage.context.character_name }}
                    </v-alert>
                    <v-alert density="compact" v-if="selectedImage.context.instructions" icon="mdi-comment-text" variant="text" color="grey">
                      {{ selectedImage.context.instructions }}
                    </v-alert>

                    <div v-if="selectedImage.context.vis_type === 'CHARACTER'">
                      <!-- character actions -->
                      <v-btn color="primary" variant="text" prepend-icon="mdi-image-frame" @click.stop="setCharacterCoverImage()">
                        Set as cover image
                      </v-btn>
                    </div>

                  </v-card-text>
                </v-card>

              </v-col>
            </v-row>
          </v-col>
        </v-row>
      </v-card-text>
    </v-card>
  </v-dialog>

</template>
<script>

export default {
  name: 'VisualQueue',
  inject: ['requestAssets', 'getWebsocket', 'registerMessageHandler'],
  data() {
    return {
      selectedImage: null,
      dialog: false,
      images: [],
      newImages: false,
      selectOnGenerate: false,
      generating: false,
    }
  },
  emits: ["new-image"],
  methods: {
    deleteImage() {
      let index = this.images.indexOf(this.selectedImage);
      this.images.splice(index, 1);
      if(this.images.length > 0) {
        this.selectedImage = this.images[0];
      } else {
        this.selectedImage = null;
        this.dialog = false;
      }
    },
    deleteAll() {
      this.images = [];
      this.selectedImage = null;
      this.dialog = false;
    },
    setCharacterCoverImage() {
      this.getWebsocket().send(JSON.stringify({
        "type": "visual",
        "action": "cover_image",
        "base64": "data:image/png;base64,"+this.selectedImage.base64,
        "context": this.selectedImage.context,
      }));
    },
    regenerateImage() {
      this.getWebsocket().send(JSON.stringify({
        "type": "visual",
        "action": "regenerate",
        "context": this.selectedImage.context,
      }));
      this.selectOnGenerate = true;
    },
    imagePreviewClass() {
      return this.selectedImage.context.format === 'portrait' ? 'img-preview-portrait' : 'img-preview-wide';
    },
    selectImage(index) {
      this.selectedImage = this.images[index];
    },
    imageSource(base64) {
      return "data:image/png;base64,"+base64;
    },
    open() {
      this.dialog = true;
      this.newImages = false;
    },
    handleMessage(message) {
      if(message.type == "image_generated") {
        let image = {
          "base64": message.data.base64,
          "context": message.data.context,
        }
        this.images.unshift(image);
        this.newImages = true;
        this.$emit("new-image", image);
        if(this.selectedImage == null || this.selectOnGenerate) {
          this.selectedImage = image;
          this.selectOnGenerate = false;
        }
        console.log("Received image", image);
      } else if(message.type === "agent_status" && message.name === "visual") {
        this.generating = message.status === "busy_bg" || message.status === "busy";
      }
    },
  },
  created() {
    this.registerMessageHandler(this.handleMessage);
  }
}
</script>

<style scoped>

.img-thumb {
  cursor: pointer;
  margin: 5px;
  width: 100%;
  height: auto;
}

.img-preview-portrait {
  width: 100%;
  height: auto;
  margin: 5px;
}

.img-preview-wide {
  width: 100%;
  height: auto;
  margin: 5px;
}

.overflow-content {
  overflow-y: auto;
  overflow-x: hidden;
  min-height: 700px;
  max-height: 850px;
}

.text-underline {
  text-decoration: underline;
}

.pre-wrap {
  white-space: pre-wrap;
}

.btn-notification {
  position: absolute;
  top: 0px;
  right: 0px;
  font-size: 15px;
  border-radius: 50%;
  width: 20px;
  height: 20px;
  display: flex;
  justify-content: center;
  align-items: center;
}

</style>
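The VisualQueue component above is driven entirely by two websocket message types: `image_generated` payloads are prepended to the queue (newest first), and `agent_status` messages for the `visual` agent toggle the busy indicator. A minimal sketch of that state logic, lifted out of Vue for illustration; the message shapes come straight from the component, while `createVisualState` and `handleVisualMessage` are illustrative helper names, not part of Talemate:

```javascript
// State mirroring VisualQueue.vue's data() block.
function createVisualState() {
  return { images: [], newImages: false, selectedImage: null, selectOnGenerate: false, generating: false };
}

// Reproduces the component's handleMessage() behaviour on a plain object.
function handleVisualMessage(state, message) {
  if (message.type === "image_generated") {
    const image = { base64: message.data.base64, context: message.data.context };
    state.images.unshift(image);            // newest image first
    state.newImages = true;
    // Select the new image only when nothing is selected yet, or when a
    // regenerate requested that the next result replace the selection.
    if (state.selectedImage === null || state.selectOnGenerate) {
      state.selectedImage = image;
      state.selectOnGenerate = false;
    }
  } else if (message.type === "agent_status" && message.name === "visual") {
    // "busy" and "busy_bg" both mean a generation is in flight.
    state.generating = message.status === "busy" || message.status === "busy_bg";
  }
  return state;
}
```

Selection deliberately sticks to the first image while new ones stream in, so background generations don't yank the preview out from under the user.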
templates/comfyui-workflows/default-sd15.json (new file, 110 lines)

@@ -0,0 +1,110 @@
{
  "1": {
    "inputs": {
      "ckpt_name": "protovisionXLHighFidelity3D_release0630Bakedvae.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Talemate Load Checkpoint"
    }
  },
  "3": {
    "inputs": {
      "width": 768,
      "height": 768,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Talemate Resolution"
    }
  },
  "4": {
    "inputs": {
      "text": "a puppy",
      "clip": ["1", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Talemate Positive Prompt"
    }
  },
  "5": {
    "inputs": {
      "text": "",
      "clip": ["1", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Talemate Negative Prompt"
    }
  },
  "10": {
    "inputs": {
      "add_noise": "enable",
      "noise_seed": 131938123826302,
      "steps": 50,
      "cfg": 7,
      "sampler_name": "dpmpp_2m_sde",
      "scheduler": "karras",
      "start_at_step": 0,
      "end_at_step": 10000,
      "return_with_leftover_noise": "disable",
      "model": ["1", 0],
      "positive": ["4", 0],
      "negative": ["5", 0],
      "latent_image": ["3", 0]
    },
    "class_type": "KSamplerAdvanced",
    "_meta": {
      "title": "KSampler (Advanced)"
    }
  },
  "13": {
    "inputs": {
      "samples": ["10", 0],
      "vae": ["1", 2]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "14": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["13", 0]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  }
}
templates/comfyui-workflows/default-sdxl.json (new file, 110 lines)

@@ -0,0 +1,110 @@
{
  "1": {
    "inputs": {
      "ckpt_name": "protovisionXLHighFidelity3D_release0630Bakedvae.safetensors"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {
      "title": "Talemate Load Checkpoint"
    }
  },
  "3": {
    "inputs": {
      "width": 1024,
      "height": 1024,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {
      "title": "Talemate Resolution"
    }
  },
  "4": {
    "inputs": {
      "text": "a puppy",
      "clip": ["1", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Talemate Positive Prompt"
    }
  },
  "5": {
    "inputs": {
      "text": "",
      "clip": ["1", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {
      "title": "Talemate Negative Prompt"
    }
  },
  "10": {
    "inputs": {
      "add_noise": "enable",
      "noise_seed": 131938123826302,
      "steps": 50,
      "cfg": 7,
      "sampler_name": "dpmpp_2m_sde",
      "scheduler": "karras",
      "start_at_step": 0,
      "end_at_step": 10000,
      "return_with_leftover_noise": "disable",
      "model": ["1", 0],
      "positive": ["4", 0],
      "negative": ["5", 0],
      "latent_image": ["3", 0]
    },
    "class_type": "KSamplerAdvanced",
    "_meta": {
      "title": "KSampler (Advanced)"
    }
  },
  "13": {
    "inputs": {
      "samples": ["10", 0],
      "vae": ["1", 2]
    },
    "class_type": "VAEDecode",
    "_meta": {
      "title": "VAE Decode"
    }
  },
  "14": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["13", 0]
    },
    "class_type": "SaveImage",
    "_meta": {
      "title": "Save Image"
    }
  }
}
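Both workflow templates are stored in ComfyUI's API ("prompt") format, with stable `_meta.title` labels (`Talemate Positive Prompt`, `Talemate Negative Prompt`, `Talemate Resolution`) marking the nodes that need to be filled in at generation time. A hedged sketch of how such a template can be patched by title and wrapped into the body that ComfyUI's `POST /prompt` endpoint expects; the lookup-by-title convention is inferred from the templates, and `prepareWorkflow` is an illustrative helper, not Talemate's actual implementation:

```javascript
// Locate a node in an API-format ComfyUI workflow by its "_meta.title" label.
function findNodeByTitle(workflow, title) {
  for (const id of Object.keys(workflow)) {
    const node = workflow[id];
    if (node._meta && node._meta.title === title) return node;
  }
  return null;
}

// Patch prompt text and resolution into a copy of the template, and wrap it
// into the { "prompt": ... } payload shape ComfyUI's /prompt endpoint takes.
function prepareWorkflow(template, { prompt, negativePrompt = "", width, height }) {
  const workflow = JSON.parse(JSON.stringify(template)); // deep copy; template stays pristine
  findNodeByTitle(workflow, "Talemate Positive Prompt").inputs.text = prompt;
  findNodeByTitle(workflow, "Talemate Negative Prompt").inputs.text = negativePrompt;
  const resolution = findNodeByTitle(workflow, "Talemate Resolution");
  if (width) resolution.inputs.width = width;
  if (height) resolution.inputs.height = height;
  // Submit with e.g. fetch("http://127.0.0.1:8188/prompt", { method: "POST",
  //   headers: { "Content-Type": "application/json" }, body: JSON.stringify(payload) })
  return { prompt: workflow };
}
```

Keying on titles rather than node ids lets users swap in their own workflow JSON, as long as the Talemate-labelled nodes are present.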