Talemate

Allows you to play roleplay scenarios with large language models.

Screenshots

⚠️ It does not run any large language models itself but relies on existing APIs. Currently supports OpenAI, text-generation-webui and LMStudio. 0.18.0 also adds support for generic OpenAI API implementations, but generation quality on those will vary.

This means you need one of the following:

  • an OpenAI API key
  • local (or remote via runpod) LLM inference set up via text-generation-webui or LMStudio
  • any other OpenAI API implementation that implements the v1/completions endpoint (see the quick check below)
    • tested: llamacpp with the api_like_OAI.py wrapper
    • let me know if you have tested any other implementations and whether they failed, worked, or landed somewhere in between
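
If you are unsure whether a third-party backend actually implements that endpoint, a quick manual request is an easy way to check. This is only an illustrative sketch; the host, port and model name are placeholders for whatever your backend uses:

    # placeholder host/port/model - substitute your backend's values
    curl http://localhost:5000/v1/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "your-model", "prompt": "Hello,", "max_tokens": 16}'

A JSON response containing a "choices" array is a good sign that the endpoint is usable.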

Current features

  • responsive, modern UI
  • agents
    • conversation: handles character dialogue
    • narration: handles narrative exposition
    • summarization: handles summarization to compress context while maintaining history
    • director: can be used to direct the story / characters
    • editor: improves AI responses (very hit and miss at the moment)
    • world state: generates world snapshot and handles passage of time (objects and characters)
    • creator: character / scenario creator
    • tts: text to speech via elevenlabs, coqui studio, coqui local
  • multi-client support (agents can be connected to separate APIs)
  • long term memory
    • chromadb integration
    • passage of time
  • narrative world state
    • Automatically keep track of and reinforce selected character and world truths / states.
  • narrative tools
  • creative tools
    • manage multiple NPCs
    • AI-backed character creation with template support (jinja2)
    • AI-backed scenario creation
  • context management
    • Manage character details and attributes
    • Manage world information / past events
    • Pin important information to the context (Manually or conditionally through AI)
  • runpod integration
  • overridable templates for all prompts (jinja2)

Planned features

Kinda making it up as I go along, but I want to lean more into gameplay through AI, keeping track of game states and moving away from simple roleplay towards a more game-ified experience.

In no particular order:

  • Extension support
    • modular agents and clients
  • Improved world state
  • Dynamic player choice generation
  • Better creative tools
    • node based scenario / character creation
  • Improved and consistent long term memory and accurate current state of the world
  • Improved director agent
    • Right now this doesn't really work well on anything but GPT-4 (and even there it's debatable). It tends to steer the story in a way that introduces pacing issues. It needs a model that is creative but also reasons really well, I think.
  • Gameplay loop governed by AI
    • objectives
    • quests
    • win / lose conditions
  • stable-diffusion client for in-place visual generation

Quickstart

Installation

Post here if you run into problems during installation.

There is also a troubleshooting guide that might help.

Windows

  1. Download and install Python 3.10 or Python 3.11 from the official Python website. ⚠️ Python 3.12 is currently not supported.
  2. Download and install Node.js v20 from the official Node.js website. This will also install npm. ⚠️ v21 is currently not supported.
  3. Download the Talemate project to your local machine from the Releases page.
  4. Unpack the download and run install.bat by double-clicking it. This will set up the project on your local machine.
  5. Once the installation is complete, you can start the backend and frontend servers by running start.bat.
  6. Navigate your browser to http://localhost:8080

Linux

Python 3.10 or 3.11 is required. ⚠️ Python 3.12 is not supported yet.

Node.js v19 or v20 is required. ⚠️ v21 is not supported yet.

  1. git clone git@github.com:vegu-ai/talemate
  2. cd talemate
  3. source install.sh
  4. Start the backend: python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050.
  5. Open a new terminal, navigate to the talemate_frontend directory, and start the frontend server by running npm run serve.
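
Put together, and using the same commands as the steps above, the Linux setup looks like this (backend and frontend each in their own terminal):

    # terminal 1: clone, install and start the backend
    git clone git@github.com:vegu-ai/talemate
    cd talemate
    source install.sh
    python src/talemate/server/run.py runserver --host 0.0.0.0 --port 5050

    # terminal 2: start the frontend dev server
    cd talemate/talemate_frontend
    npm run serve

As with the Windows setup, the UI should then be reachable at http://localhost:8080.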

Connecting to an LLM

On the right hand side click the "Add Client" button. If there is no button, you may need to toggle the client options by clicking this button:

Client options

Text-generation-webui

⚠️ As of version 0.13.0 the legacy text-generation-webui API (--extension api) is no longer supported; please use their new --extension openai API implementation instead.
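
For reference, enabling that extension when launching text-generation-webui looks roughly like this (this example is not from the talemate docs, and flag names can change between text-generation-webui versions, so double-check against their documentation):

    # run from the text-generation-webui directory; verify flag names against their docs
    python server.py --extensions openai --listen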

If you're planning to connect to text-generation-webui, you can likely leave everything in the modal as is and just click Save.

Add client modal

Any of the top models in any of the size classes here should work well (I wouldn't recommend going lower than 7B):

https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/

OpenAI

If you want to add an OpenAI client, just change the client type and select the appropriate model.

Add client modal

If you are setting this up for the first time, you should now see the client, but it will have a red dot next to it, stating that it requires an API key.

OpenAI API Key missing

Click the SET API KEY button. This will open a modal where you can enter your API key.

OpenAI API Key missing

Click Save and after a moment the client should have a green dot next to it, indicating that it is ready to go.

OpenAI API Key set

Ready to go

You will know you are good to go when the client and all the agents have a green dot next to them.

Ready to go

Load the introductory scenario "Infinity Quest"

Generated using talemate creative tools, mostly used for testing / demoing.

You can load it (and any other talemate scenarios or save files) by expanding the "Load" menu in the top left corner and selecting the middle tab. Then simply search for a partial name of the scenario you want to load and click on the result.

Load scenario location

Loading character cards

Supports both v1 and v2 character card specs.

Expand the "Load" menu in the top left corner and either click on "Upload a character card" or simply drag and drop a character card file into the same area.

Load character card location

Once a character card is uploaded, talemate may take a moment to respond, as it needs to convert the card to the talemate format and will also run additional LLM prompts to generate character attributes and world state.

Make sure you save the scene after the character is loaded, as it can then be loaded as a normal talemate scenario in the future.

Further documentation

Please read the documents in the docs folder for more advanced configuration and usage.