Commit graph

5 commits

veguAI
80256012ad
0.28.0 (#148)
* fix issue where saving a new scene would save into a "new scenario" directory instead of a relevantly named directory

* implement function to fork new scene file from specific message

* dynamic choice generation

* dynamic choice generation progress

* prompt tweaks

* disable choice generation by default
prompt tweaks

* prompt tweaks for assisted RAG tasks

* allow analyze_text_and_extract_context to include character context

* more prompt tweaks for RAG assist during conversation generation

* open director settings from dynamic action dialog

* adjust wording

* remove player choice message if the trigger message is removed (or regenerated)

* fix issue with dialogue cleanup where narration over multiple lines would end up being marked incorrectly

* dynamic action generation custom instructions
dynamic action generation narration for sensory actions

* fix actions when acting as another character

* 0.28.0

* conversation agent: split out generation settings, add actor instructions extension, add actor instruction offset slider

* prompt tweaks

* fix ai message regenerate if generated from choice

* cruft

* layered history implementation through summarizer
summarization tweaks

* show layered history in ux

* layered history fixes and tweaks
conversation actor instruction fixes

* more summarization fixes

* fix missing actor instructions

* prompt tweaks

* prompt tweaks

* force lower case when checking sensory type

* agent modal polish
implement find-natural-scene-termination summarizer action
some summarization tweaks

* integrate find_natural_scene_termination with layered history

* collect all denouements at once

* relock

* fix some issues with screenplay type formatting in conversation agent

* cleanup

* revert layered history summarization to use max_process_tokens instead of using ai to find scene termination, as that process falls apart in layer 1 and higher, where every item is a scene in itself

* implement ai assisted digging through layered history to answer queries

* dig_layered_history tweaks and improvements

* prompt tweaks

* adjust budget

* adjust budget for RAG context

* layered_history disabled by default

* prompt tweaks to reinforcement updates

* prompt tweaks

* dig layered history - response without function call to be treated as answer

* clarify style keywords to avoid bleeding into the prompt as subject matter

* fix issue with cover image updates

* fix missing dialogue from context history

* fix issue where new scenes wouldn't load

* fix crash with layered summarization

* more context history fixes

* fix assured dialogue message in context history

* prompt tweaks

* tweaks to layered history generation

* prompt tweaks

* conversation agent can dig layered history for extra context

* some fixes to dig layered history

* scene fork adjust layered history

* layered history status indication

* allow configuration of message styles and colors

* fix issue where layered history generate would get stuck on layer 0

* dig layered history default to false

* prompt tweaks

* context investigation messages

* tweaks to context investigation

* context investigation polish of UX and allow specifying trigger

* prompt tweaks

* allow hiding of ci and director messages

* wire ci shortcut buttons

* prompt tweaks

* prompt tweaks

* carry on analysis when digging layered history

* improve quality of generate choices by anchoring to last line in the scene

* update hint message

* prompt tweaks

* change default value for max_process_tokens

* docs

* dig layered history only if there are layers

* always enforce num choices limit

* relock

* typos

* prompt tweaks

* docs for forking a scene

* prompt tweaks

* world editor rubber banding fixes follow up

* layered history cleanup fixes

* gracefully handle malformed dig() call

* handle malformed answer() call

* only generate choices if last content isn't player message

* include more context in autocomplete prompts

* prompt tweaks

* typo

* fix issue where inactive characters could not be deleted

* more character delete bugs

* dig layered history fixes

* discard empty context investigations

* fix issue with autocomplete no longer working in world editor

* prompt tweaks

* support single quotes

* prompt tweaks

* fix issue with context investigation if final message was narrator text

* Include the query in the context investigation message

* context investigations should note when historic events occurred

* instructions on how to use internal notes

* time_diff returns an empty string when no time is supplied

* prompt tweaks

* fix date calculations for historic entries

* change default values

* prompt tweaks

* fix history regenerate continuing through page reload

* reorganize websocket tasks

* allow cancelling of history regenerate

* Capitalize first letter of summarization

* include base layer in context investigations

* prompt tweaks

* fix issue where context investigations would expand too much of the history at once

* attempt to determine character knowledge during context investigation

* prompt tweaks

* prompt tweaks

* fix missing timestamps

* more context during layer history digging

* fix issue with act-as not being able to select past the first npc if a scene had more than one active npc in it

* docs

* error handling for malformed answer call

* timestamp calculation fixes and summarization improvements

* lock message manipulation while the ux is busy

* prompt tweaks

* toggling 'log debug messages' will log all messages to console even if no filter is specified

* layered history generation cancellable from ux

* prevent loading scene while another scene is currently loading

* improvements to choice generation prompt and error handling

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix issue with successive scene load not working

* correctly display timestamps and generated layers during history regen

* summarization improvements

* clean up context investigation prompt

* prompt tweaks

* increase response token size for dig_layered_history

* define missing presets

* missing preset

* prompt tweaks

* fix simulation suite

* attach punkt download to backend start, not frontend start

* dig layered history fixes

* prompt tweaks

* fix summarize_and_pin

* more fixes for time calculations

* relock

* prompt tweaks

* remove dupe entry from layered history

* bash version of update script

* prompt tweaks

* layered history defaults to enabled

* default decreased to 0.3 chance

* fix multi character natural flow selection with clients that don't support LLM coercion

* fix simulation suite call to change a character

* typo

* remove deprecated test

* use python3

* add missing 4o models

* add proper configs for 4o models

* prompt tweaks

* update reinforcement prompt to ignore context investigations

* scene.snapshot formatting and dig_layered_history ignores reinforcements

* use end date instead of start date

* Reword 'Moments ago' to 'Recently' as it is more forgiving and applicable to longer time ranges

* fix time calculation issues during summarization of new entries

* no need for scoping

* dont display as range if start and end of entry are identical

* prompt tweaks
2024-11-24 15:43:27 +02:00
veguAI
95a17197ba
0.26.0 (#133)
* implement manually disabling and enabling clients

* relock

* fix warning spam

* start moving stuff around

* move more stuff

* start separating world state manager into more manageable submodules

* character title

* scroll home to top always

* finish separating character state editor into components

* fix deferred nav to character sections

* separate components for pin and contextdb managing

* fix issue with context character filter search

* fix world state manager ux state reset issues

* wsm menu refactor
allow updating character image from wsm
cover image layout fixes

* remove debug spam

* fix client deletion / disabling rubber banding issue

* deactivate / activate / delete characters through wsm

* reload character instead

* fix koboldcpp client jiggle arguments

* save scene title

* fix deferred nav

* fix issue where blanking a character detail would bug out

* some layout changes

* character import copies cover image

* remove debug message

* character import via wsm

* deactivate imported characters

* images nav option placeholder

* start move towards new world state templating system

* prompt tweak

* add templates/world-state/*.yaml

* switch to new world state template system in manager

* template editor progress

* more wsm template changes

* template applicator component

* template applicator added to attributes and details

* selective template application

* fix issue with template editing

* attribute and detail templates dont require instructions

* adjust character attributes and details template applicator integration

* add gpt-4o

add gpt-4o-2024-05-13

* autocomplete prompt and postprocessing tweaks

* prompt tweaks

* fix issue where saving a new scene could cause recent config changes to revert

* only download punkt if its not downloaded yet

* working character attribute templates

* character detail generate working
move template generate logic to worldstate.templates

* character creator first steps

* support contextual generate when character doesn't exist

* move talemate wsm templates to their own dir, add supports_spice and supports_style flags

* wsm character creator progress

* character creator progress

* character creator progress and wire up image creation in character editor

* templating progress

* contextual generate generation options

* ux tweaks

* wire up writing style and spice to generation

* wire spice / writing style to detail generation

* notify when spice is applied

* tweaks to generation spice notifications

* add some help / information to template editor

* fix some issues with detail and attribute generation

* some context gen tweaks

* character gen tweaks

* character color changer

* link to templates from gen option ux

* gen options for dialogue example generate

* ctrl click to max spice level

* unify spice application notification into a component for reuse

* improvements to example dialogue generation

* some refinements to character editor

* remove some old cruft from scene schema

* wsm scene editor progress

* relock

* relock

* debug message cleanup

* fix issue with tab selection when loading a scene

* scene editor progress

* centralized generation options

* pass generation settings through to character creator

* save changes from wsm view

* scene settings
save copy

* refactor world entry / states editor

* fix issue with applying non-character world state templates

* layout fixes

* allow updating of scene cover image

* move history manager to world editor

* add phi-3 base template

* dialogue cleanup improvements

* refactor scoped game-engine api

* separate legacy creator functions to own file

* remove cruft

* some cleanup and fixes

* add photo style

* remove noisy log message

* better handling of active scene

* some fixes to pin editor

* don't enforce height

* active scene context fixes

* fix intro and scene description generation

* tweak preset for scene direction and summarization tasks

* ensure memory db is open

* update frontend dependencies

* update frontend dependencies

* fix issue with prompt query_memory function returning None

* typo

* default world state templates

* new scene creation fixes
remove legacy creator ux

* scene export

* fix scene loading from upload

* add claude 3.5 sonnet

* fix automatic client selection when the current client is disabled

* remove cruft

* agent modal extended to support multiple config panels
visual agent prompt prefixes and suffixes added

* fix issue with world state template group saving

* resolve attribute name issue `copy`

* RequestInput: fix form validation and keystroke submit

* support chara load from json files; also refactor character loading to load.py

* implement simple act-as feature using tab to cycle through active characters in the scene

* docs progress

* tts settings tweaks

* fix issue with loading older talemate scenes

* docs progress

* fix issue with config validation on new installs

* some tweaks for agent setting modals

* default template changed to alpaca

* docs dependencies

* gemma2 template

* nemotron4 template

* docs

* docs

* docs

* change prompt template section to autocomplete

* fix agent config not loading for some agents

* allow deletion of player character

* fix some oddities with scene outline commit

* automatically activate player characters and create player characters with the correct actor class

* also set the first npc created as immediately active

* add has_active_npcs property and re-emit message history when scene outline is updated.

* indicate when visualizer is busy in the scene tools

* check for busy instead

* prompt tweaks for movie script type dialogue format

* gemma2 prompt fixed

* scene message colors updated

* act as narrator

* move to _old

* scene message appearance tweaks

* fix rubberbanding when editing text field in agent configs

* fix autocompletion when acting as different character or narrator

* disable autocomplete during command execution

* remove autocomplete button from scene tools

* docs

* relock

* docs

* docs

* improve context pins in dialogue context

* better approximate token count

* fix pin condition editing

* fix issue where scene save as would lose long term memory entries

* immediately clean message history when loading a new scene

* docs

* ensure intro text has formatting markers

* narrator messages written by the player can now be deleted.

* scene editor

* move docs around

* start character editor docs

* more character editor docs

* fix some ux bugs

* fix template group deletion not removing the file

* docs

* typos

* docs

* relock

* docs

* notify image generation errors

* linting

* gh pages workflow

* use poetry

* dont use poetry

* link to docs site

* set site_url

* add trailing slash

* fix image paths

* re-add tabbyai link

* fix image generation error triggering incorrectly

* fix intro formatting inconsistencies

* remove cruft

* add time passed label to history view

* date adjustments

* tests

* add gpt-4o-mini

* fix links

* remove hard nltk requirement for voice generation chunking

nltk error handling

fix typo

* docs

* fix issue with dupe character card intro text

* disable character forms while templates are being applied.

* failure during context generate no longer locks ux

* refactor client and agent status display in system bar

* llama 3.1 8b claude

* fix format

* adjustments to autocomplete dialogue instructions

* add mistral nemo

* debug info

* fix system agent status getting stuck

* readme

* readme

* fix autocomplete responses when they are framed by quotes
2024-07-26 21:43:06 +03:00
veguAI
ba64050eab
0.22.0 (#89)
* linux dev instance shortcuts

* add voice samples to gitignore

* direction mode: inner monologue

* actor direction fixes

* py script support for scene logic

* fix end_simulation call

* port sim suite logic to python

* remove dupe log

* fix typing

* section off the text

* fix end simulation command

* simulation goal, prompt tweaks

* prompt tweaks

* dialogue format improvements

* director action logged with message

* call director action log and other fixes

* generate character dialogue instructions, prompt fixes, director action ux

* fix question / answer call

* generate dialogue instructions when loading from character cards

* more dialogue format improvements

* set scene content context more reliably.

* fix innermonologue perspective

* conversation prompt should honor the client's decensor setting

* fix comfyui checkpoint list not loading

* more dialogue format fixes

* prompt tweaks

* fix sim suite group characters, prompt fixes

* npm relock

* handle inanimate objects, handle player name change issues

* don't rename details if the original name was "You"

* As the conversation goes on, dialogue instructions should be moved backwards further to have a weaker effect on immediate generations.

* add more context to character creation prompt

* fix select next talking actor when natural language flow is turned on and the LLM returns multiple character names

* prompt fixes for dialogue generation

* summarization fixes

* default to script format

* separate dialogue prompt by formatting style, tweak conversation system prompt

* remove cruft

* add gen format to agent details

* relock

* relock

* prep 0.22.0

* add claude-3-haiku-20240307

* readme
2024-03-29 21:37:28 +02:00
FinalWombat
72202dee02
Prep 0.12.0 (#26)
* no " or * just treat as spoken words

* chromadb persist to db

* collection name should contain embedding so switching between chromadb configurations doesn't brick your scenes

* fix save-as long term memory transfer

* add chroma

* director agent refactor

* tweak director command, prompt reset, ux display

* tweak director message ux

* allow clearing of prompt log

* remove auto adding of quotes if neither quote or * are present

* command to reset long term memory for the scene

* improve summarization template as it would cause some llms to add extra details

* rebuilding history will now also rebuild long term memory

* direct scene template

* fix scene time reset

* dialogue template tweaks

* better dialog format fixing

* some dialogue template adjustments

* adjust default values of director agent

* keep track of scene saved/unsaved status and confirm loading a different scene if current scene is unsaved

* prompt fixes

* remove the collection on recommitting the scene to memory, as the embeddings may have changed

* change to the official python api for the openai client and make it async

* prompt tweaks

* world state prompt parsing fixes

* improve handling of json responses

* 0 seconds ago changed to moments ago

* move memory context closer to scene

* token counts for openai client

* narrator agent option: narrate passage of time

* gitignore

* remove memory id

* refactor world state with persistence to chromadb (wip)

* remove world state update instructions

* dont display blank emotion in world state

* openai gpt-4 turbo support

* conversation agent extra instructions

* track prompt response times

* Yi and UtopiaXL

* long term memory retrieval improvements during conversations

* narrate scene tweaks

* conversation ltm augment tweaks

* hide subconfig if parent config isnt enabled

* ai assisted memory recall during conversation default to off

* openai json_object coercion only on models that support it

openai client emit prompt processing time

* 0.12.0

* remove prompt number from prompt debug list

* add prompt number back in but shift it to the upper row

* narrate time passage: hard content limit restriction for now, as gpt-4 would just write a whole chapter

* relock
2023-11-10 22:45:50 +02:00
FinalWombat
6d93b041c5
initial commit
2023-09-17 16:46:42 +03:00