talemate/docs/getting-started/installation/linux.md

## Quick install instructions

!!! warning
    Python 3.12 is currently not supported.

### Dependencies

1. Node.js and npm - see instructions here
2. Python 3.10 or 3.11 - see instructions here
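
For example, on a Debian or Ubuntu based system the dependencies can usually be installed from the package manager. This is a sketch only; package names and available Python versions differ between distributions, and some releases may need a third-party repository for Python 3.11:

```bash
# Node.js and npm from the distribution repositories
sudo apt update
sudo apt install -y nodejs npm

# Python 3.11 plus the venv module (substitute python3.10 if 3.11 is not packaged)
sudo apt install -y python3.11 python3.11-venv

# Verify the installed versions
node --version
python3.11 --version
```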

### Installation

1. `git clone https://github.com/vegu-ai/talemate.git`
2. `cd talemate`
3. `source install.sh`
    - When asked whether to install PyTorch with CUDA support, choose `y` if you have a CUDA-compatible NVIDIA GPU and have installed the necessary drivers.
4. `source start.sh`
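
Put together, the quick install is a handful of terminal commands (the install script prompts interactively for the CUDA choice):

```bash
# Clone the repository and enter it
git clone https://github.com/vegu-ai/talemate.git
cd talemate

# Run the installer; answer y to the PyTorch/CUDA prompt if you have an NVIDIA GPU
source install.sh

# Start Talemate
source start.sh
```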

If everything went well, you can proceed to connect a client.

## Additional Information

### Setting Up a Virtual Environment

1. Open a terminal.
2. Navigate to the project directory.
3. Create a virtual environment by running `python3 -m venv talemate_env`.
4. Activate the virtual environment by running `source talemate_env/bin/activate`.
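
In a terminal, the steps look like this (assuming the clone lives in a `talemate` directory):

```bash
cd talemate                        # project directory (adjust to your checkout path)
python3 -m venv talemate_env       # create the virtual environment
source talemate_env/bin/activate   # activate it; the prompt should now show (talemate_env)
```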

### Installing Dependencies

1. With the virtual environment activated, install Poetry by running `pip install poetry`.
2. Use Poetry to install dependencies by running `poetry install`.
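
With the virtual environment from the previous section active:

```bash
pip install poetry   # install Poetry into the virtual environment
poetry install       # install the backend dependencies defined in pyproject.toml
```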

### Running the Backend

1. With the virtual environment activated and dependencies installed, you can start the backend server.
2. Navigate to the `src/talemate/server` directory.
3. Run the server with `python run.py runserver --host 0.0.0.0 --port 5050`.
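
For example, from the project root with the virtual environment active:

```bash
source talemate_env/bin/activate   # if not already active
cd src/talemate/server
python run.py runserver --host 0.0.0.0 --port 5050
```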

### Running the Frontend

1. Navigate to the `talemate_frontend` directory.
2. If you haven't already, install the npm dependencies by running `npm install`.
3. Start the server with `npm run serve`.
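
In a second terminal:

```bash
cd talemate_frontend
npm install      # only needed on first run or after dependency changes
npm run serve    # start the frontend development server
```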

Please note that you may need to set environment variables or modify the host and port to suit your setup. You can refer to the `runserver.sh` and `frontend.sh` files for more details.
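
For example, to bind the backend to localhost only and use a different port, pass other values to the same flags (any free port works):

```bash
python run.py runserver --host 127.0.0.1 --port 5060
```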