Commit graph

25 commits

Author SHA1 Message Date
veguAI
113553c306
0.29.0 (#167)
* set 0.29.0

* tweaks for dig layered history (wip)

* move director agent to directory

* relock

* remove "none" from dig_layered_history response

* determine character development

* update character sheet from character development (wip)

* org imports

* alert outdated template overrides during startup

* editor controls normalization of exposition

* dialogue formatting refactor

* fix narrator.clean_result forcing * regardless of editor fix exposition setting

* move more of the dialogue cleanup logic into the editor fix exposition handlers

* remove cruft

* change to normal selects and add some margin

* move formatting option up

* always strip partial sentences

* separates exposition fixes from other dialogue cleanup operations, since we still want those

* add novel formatting style

* honor formatting config when no markers are supplied

* fix issue where sometimes character message formatting would miss character name

* director can now guide actors through scene analysis

* style fixes

* typo

* select correct system message on direction type

* prompt tweaks

* disable by default

* add support for dynamic instruction injection and include missing guide for internal note usage

* change favicon and also indicate busyness through favicon

* img

* support xtc, dry and smoothing in text gen webui

* prompt tweaks

* support xtc, dry, smoothing in koboldcpp client

* reorder

* dry, xtc and smoothing factor exposed to tabby api client

* urls to third party API documentation

* remove bos token

* add missing preset

* focal

* focal progress

* focal progress and generated suggestions progress

* fix issue with discard all suggestions

* apply suggestions

* move suggestion ux into the world state manager

* support generation options for suggestion generation

* unused import

* refactor focal to json based approach

* focal and character suggestion tweaks

* remove cruft

* remove cruft

* relock

* prompt tweaks

* layout spacing updates

* ux elements for removal of scenes from quick load menu

* context investigation refactor WIP

* context investigation refactor

* context investigation refactor

* context investigation refactor

* cleanup

* move scene analysis to summarizer agent

* remove deprecated context investigation logic

* context investigation refactor continued - split into separate file for easier maint

* allow direct specification of response context length

* context investigation and scene analysis progress

* change analysis length config to number

* remove old dig-layered-history templates

* summarizer - deep analysis is only available if there is layered history

* move world_state agent to dedicated directory

* remove unused imports

* automatic character progression WIP

* character suggestions progress

* app busy flag based on agent busyness

* indicate suggestions in world state overview

* fix issue with user input cleanup

* move conversation agent to a dedicated submodule

* Response in action analyze_text_and_extract_context is too short #162

* move narrator agent to its own submodule

* narrator improvements WIP

* narration improvements WIP

* fix issue with regen of character exit narration

* narration improvements WIP

* prompt tweaks

* last_message_of_type can set max iterations

* fix multiline parsing

* prompt tweaks

* director guides actors based on scene analysis

* director guidance for actors

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix automatic character proposals not propagating to the ux

* fix analysis length

* support director guidance in legacy chat format

* typo

* prompt tweaks

* prompt tweaks

* error handling

* length config

* prompt tweaks

* typo

* remove cruft

* prompt tweak

* prompt tweak

* time passage style changes

* remove cruft

* deep analysis context investigations honor call limit

* refactor conversation agent long term memory to use new memory rag mixin - also streamline prompts

* tweaks to RAG mixin agent config

* fix narration highlighting

* context investigation fixes
director narration guidance
summarization tweaks

* director guide narration progress
context investigation fixes for issues that would cause looping of investigations and failure to dig into the correct layers

* prompt tweaks

* summarization improvements

* separate deep analysis chapter selection from analysis into its own prompt

* character entry and exit

* cache analysis per subtype and some narrator prompt tweaks

* separate layered history logic into its own summarizer mixin and expose some additional options

* scene can now set an overall writing style using writing style templates
narrator option to enable writing style

* narrate query writing style support

* scene tools - narrator actions refactor to handler and own component

* narrator query / look at narrations emitted as context investigation messages
refactor context investigation message display
scene message meta data object

* include narrative direction

* improve context investigation message prompt insert

* reorg supported parameters

* fix bug when no message history exists

* WIP make regenerate work nicely with director guidance

* WIP make regenerate work nicely with director guidance

* regenerate conversation fixes

* help text

* ux tweaks

* relock

* turn off deep analysis and context investigations by default

* long term memory options for director and summarizer

* long term memory caching

* fix summarization cache toggle not showing up in ux

* ux tweaks

* layered history summarization includes character information for mentioned characters

* deepseek client added

* Add fork button to narrator message

* analysis and guidance support for time passage narration

* cache based on message fingerprint instead of id

* configurable system prompts WIP

* configurable system prompts WIP

* client overrides for system prompts wired to ux

* system prompt overhaul

* fix issue with unknown system prompt kind

* add button to manually request dynamic choices from the director
move the generate choices logic of the director agent to its own submodule

* remove cruft

* 30 may be too long and is causing the client to disappear temporarily

* support dynamic choice generation for non-player characters

* enable `actor` tab for player characters

* creator agent now has access to rag tools
improve acting instruction generation

* client timeout fixes

* fix issue where scene removal menu stayed open after remove

* expose scene restore functionality to ux

* create initial restore point

* fix creator extra-context template

* didn't mean to remove this

* intro scene should be edited through world editor

* fix alert

* fix partial quotes regardless of editor setting
director guidance for conversation reminds to put speech in quotes

* fix @ instructions not being passed through to director guidance prompt

* anthropic model list updated

* default off

* cohere model list updated

* reset actAs on next scene load

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* prompt tweaks

* remove debug cruft

* relock

* docs on changing host / port

* fix issue with narrator / director actions not available on fresh install

* fix issue with long content classification determination result

* take this reminder to put speech into quotes out for now, it seems to do more harm than good

* fix some remaining issues with auto exposition fixes

* prompt tweaks

* prompt tweaks

* fix issue during reload

* expensive and warning ux passthrough for agent config

* layered summary analysis defaults to on

* what's new info block added

* docs

* what's new updated

* remove old images

* old img cleanup script

* prompt tweaks

* improve auto prompt template detection via huggingface

* add gpt-4o-realtime-preview
add gpt-4o-mini-realtime-preview

* add o1 and o3-mini

* fix o1 and o3

* fix o1 and o3

* more o1 / o3 fixes

* o3 fixes
2025-02-01 17:44:06 +02:00
veguAI
80256012ad
0.28.0 (#148)
* fix issue where saving a new scene would save into a "new scenario" directory instead of a relevantly named directory

* implement function to fork new scene file from specific message

* dynamic choice generation

* dynamic choice generation progress

* prompt tweaks

* disable choice generation by default
prompt tweaks

* prompt tweaks for assisted RAG tasks

* allow analyze_text_and_extract_context to include character context

* more prompt tweaks for RAG assist during conversation generation

* open director settings from dynamic action dialog

* adjust wording

* remove player choice message if the trigger message is removed (or regenerated)

* fix issue with dialogue cleanup where narration over multiple lines would end up being marked incorrectly

* dynamic action generation custom instructions
dynamic action generation narration for sensory actions

* fix actions when acting as another character

* 0.28.0

* conversation agent: split out generation settings, add actor instructions extension, add actor instruction offset slider

* prompt tweaks

* fix ai message regenerate if generated from choice

* cruft

* layered history implementation through summarizer
summarization tweaks

* show layered history in ux

* layered history fixes and tweaks
conversation actor instruction fixes

* more summarization fixes

* fix missing actor instructions

* prompt tweaks

* prompt tweaks

* force lower case when checking sensory type

* agent modal polish
implement find-natural-scene-termination summarizer action
some summarization tweaks

* integrate find_natural_scene_termination with layered history

* collect all denouements at once

* relock

* fix some issues with screenplay type formatting in conversation agent

* cleanup

* revert layered history summarization to use max_process_tokens instead of using ai to find scene termination, as that process falls apart in layer 1 and higher; at that point every item is a scene in itself.

* implement ai assisted digging through layered history to answer queries

* dig_layered_history tweaks and improvements

* prompt tweaks

* adjust budget

* adjust budget for RAG context

* layered_history disabled by default

* prompt tweaks to reinforcement updates

* prompt tweaks

* dig layered history - response without function call to be treated as answer

* clarify style keywords to avoid bleeding into the prompt as subject matter

* fix issue with cover image updates

* fix missing dialogue from context history

* fix issue where new scenes wouldn't load

* fix crash with layered summarization

* more context history fixes

* fix assured dialogue message in context history

* prompt tweaks

* tweaks to layered history generation

* prompt tweaks

* conversation agent can dig layered history for extra context

* some fixes to dig layered history

* scene fork adjust layered history

* layered history status indication

* allow configuration of message styles and colors

* fix issue where layered history generate would get stuck on layer 0

* dig layered history default to false

* prompt tweaks

* context investigation messages

* tweaks to context investigation

* context investigation polish of UX and allow specifying trigger

* prompt tweaks

* allow hiding of ci and director messages

* wire ci shortcut buttons

* prompt tweaks

* prompt tweaks

* carry on analysis when digging layered history

* improve quality of generate choices by anchoring to last line in the scene

* update hint message

* prompt tweaks

* change default value for max_process_tokens

* docs

* dig layered history only if there are layers

* always enforce num choices limit

* relock

* typos

* prompt tweaks

* docs for forking a scene

* prompt tweaks

* world editor rubber banding fixes follow up

* layered history cleanup fixes

* gracefully handle malformed dig() call

* handle malformed answer() call

* only generate choices if last content isn't player message

* include more context in autocomplete prompts

* prompt tweaks

* typo

* fix issue where inactive characters could not be deleted

* more character delete bugs

* dig layered history fixes

* discard empty context investigations

* fix issue with autocomplete no longer working in world editor

* prompt tweaks

* support single quotes

* prompt tweaks

* fix issue with context investigation if final message was narrator text

* Include the query in the context investigation message

* context investigations should note when historic events occurred

* instructions on how to use internal notes

* time_diff returns empty string when no time is supplied

* prompt tweaks

* fix date calculations for historic entries

* change default values

* prompt tweaks

* fix history regenerate continuing through page reload

* reorganize websocket tasks

* allow cancelling of history regenerate

* Capitalize first letter of summarization

* include base layer in context investigations

* prompt tweaks

* fix issue where context investigations would expand too much of the history at once

* attempt to determine character knowledge during context investigation

* prompt tweaks

* prompt tweaks

* fix missing timestamps

* more context during layer history digging

* fix issue with act-as not being able to select past the first npc if a scene had more than one active npc in it

* docs

* error handling for malformed answer call

* timestamp calculation fixes and summarization improvements

* lock message manipulation while the ux is busy

* prompt tweaks

* toggling 'log debug messages' will log all messages to console even if no filter is specified

* layered history generation cancellable from ux

* prevent loading scene while another scene is currently loading

* improvements to choice generation prompt and error handling

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix issue with successive scene load not working

* correctly display timestamps and generated layers during history regen

* summarization improvements

* clean up context investigation prompt

* prompt tweaks

* increase response token size for dig_layered_history

* define missing presets

* missing preset

* prompt tweaks

* fix simulation suite

* attach punkt download to backend start, not frontend start

* dig layered history fixes

* prompt tweaks

* fix summarize_and_pin

* more fixes for time calculations

* relock

* prompt tweaks

* remove dupe entry from layered history

* bash version of update script

* prompt tweaks

* layered history defaults to enabled

* default decreased to 0.3 chance

* fix multi character natural flow selection with clients that don't support LLM coercion

* fix simulation suite call to change a character

* typo

* remove deprecated test

* use python3

* add missing 4o models

* add proper configs for 4o models

* prompt tweaks

* update reinforcement prompt ignores context investigations

* scene.snapshot formatting and dig_layered_history ignores reinforcements

* use end date instead of start date

* Reword 'Moments ago' to 'Recently' as it is more forgiving and applicable to longer time ranges

* fix time calculation issues during summarization of new entries

* no need for scoping

* don't display as range if start and end of entry are identical

* prompt tweaks
2024-11-24 15:43:27 +02:00
veguAI
bb1cf6941b
0.27.0 (#137)
* move memory agent to directory structure

* chromadb settings rework

* memory agent improvements
embedding presets
support switching embeddings without restart
support custom sentence transformer embeddings

* toggle to hide / show disabled clients

* add memory debug tools

* chromadb no longer needs its dedicated config entry

* add missing emits

* fix initial value

* hidden disabled clients no longer cause enumeration issues with client actions

* improve memory agent error handling and hot reloading

* more memory agent error handling

* DEBUG_MEMORY_REQUESTS off

* relock

* sim suite: fix issue with removing or changing characters

* relock

* fix issue where actor dialogue editor would break with multiple characters in the scene

* remove cruft

* implement interrupt function

* margin adjustments

* fix rubber banding issue in world editor when editing certain text fields

* status notification when re-importing vectordb due to embeddings change

* properly open new client context on agent actions

* move jiggle apply to the end of prompt tune stack

* narrator agent length limit and jiggle settings added - also improve post generation cleanup

* progress story prompt improvements

* narrator prompt and cleanup tweaks

* prompt tweak

* revert

* autocomplete dialogue improvements

* Unified process (#141)

* progress to unified process

* --dev arg

* use gunicorn to serve built frontend

* gunicorn config adjustments

* remove dist from gitignore

* revert

* uvicorn instead

* save decode

* graceful shutdown

* refactor unified process

* clean up frontend log messages

* more logging fixes

* 0.27.0

* startup message

* clean up scripts a bit

* fixes to update.bat

* fixes to install.bat

* sim suite supports generation cancellation

* debug

* simplify narrator prompts

* prompt tweaks

* unified docker file

* update docker compose config for unified docker file

* cruft

* fix startup in linux docker

* download punkt so its available

* prompt tweaks

* fix bug when editing scene outline would wipe message history

* add o1 models

* add sampler, scheduler and cfg config to a1111 visualizer

* update installation docs

* visualizer configurable timeout

* memory agent docs

* docs

* relock

* relock

* fix issue where changing embeddings on immutable scene would hang

* remove debug message

* take torch install out of poetry since conditionals don't work.

* torch gets installed through some dependency so put it back into poetry, but reinstall with cuda if cuda support exists

* fix install syntax

* no need for torchvision

* torch cuda install added to linux install script

* add torch cuda install to update.bat

* docs

* docs

* relock

* fix install.sh

* handle torch+cuda install in docker

* docs

* typo
2024-09-23 12:55:34 +03:00
veguAI
95a17197ba
0.26.0 (#133)
* implement manually disabling and enabling clients

* relock

* fix warning spam

* start moving stuff around

* move more stuff

* start separating world state manager into more manageable submodules

* character title

* scroll home to top always

* finish separating character state editor into components

* fix deferred nav to character sections

* separate components for pin and contextdb managing

* fix issue with context character filter search

* fix world state manage ux state reset issues

* wsm menu refactor
allow updating character image from wsm
cover image layout fixes

* remove debug spam

* fix client deletion / disabling rubber banding issue

* deactivate / activate / delete characters through wsm

* reload character instead

* fix koboldcpp client jiggle arguments

* save scene title

* fix deferred nav

* fix issue where blanking a character detail would bug out

* some layout changes

* character import copies cover image

* remove debug message

* character import via wsm

* deactivate imported characters

* images nav option placeholder

* start move towards new world state templating system

* prompt tweak

* add templates/world-state/*.yaml

* switch to new world state template system in manager

* template editor progress

* more wsm template changes

* template applicator component

* template applicator added to attributes and details

* selective template application

* fix issue with template editing

* attribute and detail templates don't require instructions

* adjust character attributes and details template applicator integration

* add gpt-4o

add gpt-4o-2024-05-13

* autocomplete prompt and postprocessing tweaks

* prompt tweaks

* fix issue where saving a new scene could cause recent config changes to revert

* only download punkt if its not downloaded yet

* working character attribute templates

* character detail generate working
move template generate logic to worldstate.templates

* character creator first steps

* support contextual generate when character doesn't exist

* move talemate wsm templates to their own dir, add supports_spice and supports_style flags

* wsm character creator progress

* character creator progress

* character creator progress and wire up image creation in character editor

* templating progress

* contextual generate generation options

* ux tweaks

* wire up writing style and spice to generation

* wire spice / writing style to detail generation

* notify when spice is applied

* tweaks to generation spice notifications

* add some help / information to template editor

* fix some issues with detail and attribute generation

* some context gen tweaks

* character gen tweaks

* character color changer

* link to templates from gen option ux

* gen options for dialogue example generate

* ctrl click to max spice level

* unify spice application notification into a component for reuse

* improvements to example dialogue generation

* some refinements to character editor

* remove some old cruft from scene schema

* wsm scene editor progress

* relock

* relock

* debug message cleanup

* fix issue with tab selection when loading a scene

* scene editor progress

* centralized generation options

* pass generation settings through to character creator

* save changes from wsm view

* scene settings
save copy

* refactor world entry / states editor

* fix issue with applying non-character world state templates

* layout fixes

* allow updating of scene cover image

* move history manager to world editor

* add phi-3 base template

* dialogue cleanup improvements

* refactor scoped game-engine api

* separate legacy creator functions to own file

* remove cruft

* some cleanup and fixes

* add photo style

* remove noisy log message

* better handling of active scene

* some fixes to pin editor

* don't enforce height

* active scene context fixes

* fix intro and scene description generation

* tweak preset for scene direction and summarization tasks

* ensure memory db is open

* update frontend dependencies

* update frontend dependencies

* fix issue with prompt query_memory function returning None

* typo

* default world state templates

* new scene creation fixes
remove legacy creator ux

* scene export

* fix scene loading from upload

* add claude 3.5 sonnet

* fix automatic client selection when the current client is disabled

* remove cruft

* agent modal extended to support multiple config panels
visual agent prompt prefixes and suffixes added

* fix issue with world state template group saving

* resolve attribute name issue `copy`

* RequestInput: fix form validation and keystroke submit

* support chara load from json files also refactor character loading to load.py

* implement simple act-as feature using tab to cycle through active characters in the scene

* docs progress

* tts settings tweaks

* fix issue with loading older talemate scenes

* docs progress

* fix issue with config validation on new installs

* some tweaks for agent setting modals

* default template changed to alpaca

* docs dependencies

* gemma2 template

* nemotron4 template

* docs

* docs

* docs

* change prompt template section to autocomplete

* fix agent config not loading for some agents

* allow deletion of player character

* fix some oddities with scene outline commit

* automatically activate player characters and create player characters with the correct actor class

* also set the first npc created as immediately active

* add has_active_npcs property and re-emit message history when scene outline is updated.

* indicate when visualizer is busy in the scene tools

* check for busy instead

* prompt tweaks for movie script type dialogue format

* gemma2 prompt fixed

* scene message colors updated

* act as narrator

* move to _old

* scene message appearance tweaks

* fix rubberbanding when editing text field in agent configs

* fix autocompletion when acting as different character or narrator

* disable autocomplete during command execution

* remove autocomplete button from scene tools

* docs

* relock

* docs

* docs

* improve context pins in dialogue context

* better approximate token count

* fix pin condition editing

* fix issue where scene save as would lose long term memory entries

* immediately clean message history when loading a new scene

* docs

* ensure intro text has formatting markers

* narrator messages written by the player can now be deleted.

* scene editor

* move docs around

* start character editor docs

* more character editor docs

* fix some ux bugs

* fix template group deletion not removing the file

* docs

* typos

* docs

* relock

* docs

* notify image generation errors

* linting

* gh pages workflow

* use poetry

* don't use poetry

* link to docs site

* set site_url

* add trailing slash

* fix image paths

* re-add tabbyai link

* fix image generation error triggering incorrectly

* fix intro formatting inconsistencies

* remove cruft

* add time passed label to history view

* date adjustments

* tests

* add gpt-4o-mini

* fix links

* remove hard nltk requirement for voice generation chunking

nltk error handling

fix typo

* docs

* fix issue with dupe character card intro text

* disable character forms while templates are being applied.

* failure during context generate no longer locks ux

* refactor client and agent status display in system bar

* llama 3.1 8b claude

* fix format

* adjustments to autocomplete dialogue instructions

* add mistral nemo

* debug info

* fix system agent status getting stuck

* readme

* readme

* fix autocomplete responses when they are framed by quotes
2024-07-26 21:43:06 +03:00
veguAI
39bd02722d
0.25.0 (#100)
* flip title and name in recent scenes

* fix issue where a message could not be regenerated after applying continuity error fixes

* prompt tweaks

* allow json parameters for commands

* autocomplete improvements

* dialogue cleanup fixes

* fix issue with narrate after dialogue and llama3 (and other models that don't have a line break after the user prompt in their prompt template)

* expose ability to auto generate dialogue instructions to wsm character ux

* use b64_json response type

* move tag checks up so they match first

* fix typo

* prompt tweak

* api key support

* prompt tweaks

* editable parameters in prompt debugger / tester

* allow resetting of prompt params

* codemirror for prompt editor

* prompt tweaks

* more prompt debug tool tweaks

* some extra control for `context_history`

* new analytical preset (testing)

* add `join` and `llm_can_be_coerced` to jinja env

* support factual list summaries

* prompt tweaks to continuity check and fix

* new summarization method `facts` exposed to ux

* clamp mistral ai temperature according to their new requirements

* prompt tweaks

* better parsing of fixed dialogue response

* prompt tweaks

* fix intermittent empty meta issue

* history regen status progression and small ux tweaks

* summary entries should always be condensed

* google gemini support

* relock to install google-cloud-aiplatform for vertex ai inference

* fix instruction link

* better error handling of google safety validation and allow disabling of safety validation

* docs

* clarify credentials path requirements

* tweak error line identification

* handle quota limit error

* autocomplete ux wired to assistant plugin instead of command

* autocomplete narrative editing and fixes to autocomplete during dialog edit

* main input autocomplete tweaks

* allow new lines in main input

* 0.25.0 and relock

* fix issue with autocomplete elsewhere locking out main input

* better way to determine remote service

* prompt tweak

* fix rubberbanding issue when editing character attributes

* add open mistral 8x22

* fix continuity error check summary inclusion of target entry

* docs

* default context length to 8192

* linting
2024-05-05 22:16:03 +03:00
veguAI
27eba3bd63
0.22.0 2024-03-29 21:41:45 +02:00
veguAI
abdfb1abbf
WIP: Prep 0.21.0 (#83)
* cleanup

* refactor clean_dialogue

* prompt fixes

* prompt fixes

* conversation format types - movie script and chat (legacy)

* stopping strings updated

* mistral.ai client

* prompt tweaks

* mistral client return token counts

* anthropic client

* archive history emits whole object so we can inspect time stamps

* show timestamp in history dialog

* openai compat fixes to stop trying to coerce openai url path schema and to never attempt to retrieve the model name automatically, hopefully improving compatibility with the various openai api implementations across the board

* openai compat client let api control prompt template via config option

* fix custom client configs and implement max backscroll

* fix backscroll limit

* remove debug message

* prep 0.21.0

* include model name in prompt template selection label

* use tabs for side nav in app config modal

* readme / docs

* fix issue where "No API key set" could be persisted as the selected model name to the config

* deepinfra example

* linting
2024-03-10 18:03:12 +02:00
veguAI
2f07248211
Prep 0.20.0 (#77)
* fix issue where recent save cover images would sometimes not load

* paraphrase prompt tweaks

* action_to_narration regenerate compatibility fixes

* sim suite add answer question instruction

* more sim suite tweaks

* refactor agent details display in agent bar

* visual agent progress (a1111 support)

* visual gen prompt tweaks

* openai compat client pass max_tokens

* world state sequential reinforcement max tokens tightened

* improve item names

* Improve item names

* attempt to remove "changed from.." notes when altering an existing character sheet

* prompt improvements for single character portraits

* visual agent progress

* fix issue where character.update wouldn't update long-term memory

* remove experimental flag for now

* add better instructions for updating existing character sheet

* background processing for agents, visual and tts

* fix selected voice not saving between restarts for elevenlabs

* lessen timeout

* clean up agent status logic

* conditional agent configs

* comfyui support

* visualization queue

* refactor visual styles, comfyui progress

* regen images
auto cover image assign
websocket handler plugin abstraction
agent websocket handler

* automatic1111 fixes
agent status and ready checks

* tweaks to character portrait prompt

* system prompt for visualize

* textgenwebui use temp smoothing on yi models

* comment out api key for now

* fixes issues with openai compat client for retaining api key and auto fixing urls

* update_reinforcment tweaks

* agent status emit from one place

* emit agent status as asyncio task

* remove debug output

* tts add openai support

* openai img gen support

* fix issue with comfyui checkbox list not loading

* tts model selection for openai

* narrate_query include character sheet if character is referenced in query
improve visual character portrait generation prompt

* client implementation extra field support and runpod vllm client example

* relock

* fix issue where changing context length would cause next generation to error

* visual agent tweaks and auto gen character cover image in sim suite

* fix issue with readiness lock when there weren't any clients defined

* load scene readiness fixes

* linting

* docs

* notes for the runpod vllm example
2024-02-16 13:57:45 +02:00
veguAI
add4893939
Prep 0.19.0 (#67)
* linting

* improve prompt devtools: test changes, show more information

* some more polish for the new prompt devtools

* up default conversation gen length to 128

* openai client tweaks, talemate sets max_tokens on gpt-3.5 generations

* support new openai embeddings (and default to text-embedding-3-small)

* ux polish for character sheet and character state ux

* actor instructions

* experiment using # for context / instructions

* fix bug where regenerating history would mess up time stamps

* remove trailing ]

* prevent client ctx from being unset

* fix issue where sometimes you'd need to delete a client twice for it to disappear

* upgrade dependencies

* set 0.19.0

* fix performance degradation caused by circular loading animation

* remove coqui studio support

* fix issue when switching from unsaved creative mode to loading a scene

* third party client / agent support

* edit dialogue examples through character / actor editor

* remove "edit dialogue" action from editor - replaced by character actor instructions

* different icon for delete

* prompt adjustment for acting instructions

* adhoc context generation for character attributes and details

* add adhoc generation for character description

* contextual generation tweaks

* contextual generation for dialogue examples
fix some formatting issues

* contextual generation for world entries

* prepopulate initial recent scenarios with demo scenes
add experimental holodeck scenario

* scene info
scene experimental

* assortment of fixes for holodeck improvements

* more holodeck fixes

* refactor holodeck instructions

* rename holodeck to simulation suite

* better scene status messages

* add new gpt-3.5-turbo model, better json response coercion for older models

* allow exclusion of characters when persisting based on world state

* better error handling of world state response

* better error handling of world state response

* more simulation suite fixes

* progress color

* world state character name mapping support

* if neither quote nor asterisk is in message default to quotes

* fix rerun of new paraphrase op

* sim suite ping that ensures characters are not aware of sim

* fixes for better character name assessment
simulation suite can now give the player character a proper name

* fix bug with new status notifications

* sim suite adjustments and fixes and tuning

* sim suite tweaks

* impl scene restore from file

* prompting tweaks for reinforcement messages and acting instructions

* more tweaks

* dialogue prompt tweaks for rerun + rewrite

* fix bug with character entry / exit with narration

* linting

* simsuite screenshots

* screenshots
2024-02-06 00:40:55 +02:00
vegu-ai-tools
c5c53c056e readme updates 2024-01-26 13:29:21 +02:00
veguAI
d768713630
Prep 0.17.0 (#48)
* improve windows install script to check for compatible python versions, also work with multi version python installs

* bunch of llm prompt templates

* first gamestate directing impl

* lower similarity threshold when checking for repetition in llm responses

* tweaks to narrate after dialog prompt
tweaks to extract character sheet prompt

* set_context cmd

* Xwin MoE

* thematic generator for randomized content stimuli

* add a memory query to extract character sheet

* direct-scene prompt tweaks

* conversation prompt tweaks

* inline character creation from gameplay instruction template
expose thematic generator to prompt templates

* Mixtral
Synthia-MoE

* display prompt and response side by side

* improve ensure_dialogue_format

* prompt tweaks

* prevent double passive narration in one round
improvements to persist character logic

* SlimOrca
OpenBuddy

* prompt tweaks

* runpod status check wrapped in asyncio

* generate_json_list creator agent action

* limit conversation retries to 2
fix issue where REPETITION signal trigger would get sent with the prompt

* smaller agent tweaks

* thematic generator personality list
thematic generator generate from sets of lists

* adjust tests

* mistral prompt adjustment

* director: update content context

* prompt adjustments

* nous-hermes-2-yi
dolphin-2.2-yo
dolphin-2.6-mixtral

* status messages

* determine character goals
generate json lists

* fix error when chromadb add was called before db was ready (wait until the db is fully initialized)

* only strip extra spaces off of prompt
textgenwebui: half temperature on -yi- models

* prompt tweaks

* more thematic generators

* direct scene without character should just run the scene instructions if they exist

* as_question_answer for query_scene

* context_history revamp

* Aurora-Nights
MixtralOrochi
dolphin-2.7-mixtral
nous-hermes-2-solar

* remove old context_history calls

* mv world_state.py to subdir
FlatDolphinMaid
Goliath
Norobara
Nous-Capybara

* world state manager first progress

* context db manager

* fix issue with some clients not remembering context length settings after talemate restart

* Sensualize-Solar

* improve RAG prompt

* conversation agent use [ as a stopping string since the new reinforcement messages use that

* new method for RAG during conversation

* mixtral_11bx2_moe

* option to reset context db from manager ui

* fix context db cleanup if scene is closed without saving

* didn't mean to commit that

* hide internal meta tags

* keep track of manual context entries in scene save file so it can be rebuilt.

* auto save
auto progress
quick settings hotbar options

* manual mode
actor dialogue tools
refactor toolbar

* narrate directed progress
reorganize narration tools into one cmd module

* 0.17.0

* Mixtral_34Bx2
Sensualize-Mixtral
openchat

* fix save-as action

* fix issue where too little context was joined in via RAG

* context pins implementation

* show active pins in world state component

* pin condition eval and world state agent action config

* Open_Gpt4

* summarization prompt improvements
system prompt for summarization

* guidance prompt for time passage narration

* fix rerun for generic / unhandled messages

* prompt fixes

* summarization methods

* prompt adjustments

* world tools to hot bar
ux tweaks

* bagel-dpo

* context state reinforcements support different insertion methods now (sequential, all context or conversation specific context)

* first progress on world state reinforcement templating

* Kunoichi

* tweaks to update reinforcements prompt

* world state templates progress

* world state templates integration into main ux

* fix issue where openai client wouldn't accept context length override

* don't reconfigure client if no arguments are provided

* pin condition prompt fixes
world state apply template command label set

* world information / lore entries and reinforcement

* show world entry states reinforcers in ux

* gitignore

* dynamic scenario generation progress

* dynamic scenario experiment

* gitignore

* need to emit world state even if we don't run it during scene init

* summarize and pin action

* poetry relock

* template question / attribute cannot be empty

* fix issue with summarize and pin not respecting selected line

* keep reinforcement messages in history, but keep the same one from stacking up

* narrate query prompt: more natural-sounding response

* manage pins from world entry editor

* pin_only tag

* ts aware summarize and pin
pin text rendered to context with time label
context reuses session id (this fixes the issue where editing a context entry without saving the scene caused removal of the entry the next time the scene was loaded)

* UX to add character state from template within the worldstate manager UX

* move divider

* handle agent emit error
fix issue with state reinforcer validation

* layout fixes in world state character panel
physical health template added to example config

* fix pin_only undefined error in world entry editor

* laser-dolphin
Noromaid-v0.4-Mixtral-Instruct

* show state templates for world and players in favorite list
fix applying world state template

* refresh world entry list on state creation

* changing a state from non-sequential to sequential should queue it as due

* quicksettings to bar

* fix error during memory db delete

* status messages during scene load

* removing a sequential state reinforcement should remove the reinforcement messages

* Nous-Hermes-2-Mixtral

* fix sync issue when editing character details through contextdb

* immutable save property

* enable director

* update example config

* enable director when loading a scene file that has instructions

* fix more openai client funkyness with context size and losing model

* iq dyn scenario prompt fixes

* delay client save so that dragging the ctx slider doesn't send off a million requests
default openai ctx to 8k

* input disabled while clients active

* declare event

* narrate query prompt tweaks

* fixes to dialogue cleanup logic that would cut off messages after ':'

* init git repo if it doesn't exist

* pull current branch

* add 12 hours as option

* world-state persist deactivated

* install npm packages

* fix typo

* prompt tweaks

* new screenshots and features updated

* update screenshot
2024-01-19 11:47:38 +02:00
vegu-ai-tools
33b043b56d docs 2023-12-11 21:12:34 +02:00
veguAI
611f77a730
Prep 0.16.0 (#40)
* remove dbg message

* more work to make clients and agents modular
allow conversation and narrator to attempt to auto break AI repetition

* application settings refactor
setup third party api keys through application settings

* runpod docs

* fix wording

* docs

* improvements to auto-break-repetition functionality

* more auto-break-repetition improvements

* some cleanup to narrate on dialogue chance calculations

* changing api keys via the ux now reflects in the ux instantly

* memory agent / chromadb agent - wrap blocking functions calls in asyncio

* clean up narrate progression prompt and function

* turn off dedupe debug message for now

* encourage the AI to break repetition as well

* indicate if the current model is missing a LLM prompt template
add prompt template to client modal
fix a bunch of bad vue code

* only show llm prompt when editing client

* OpenHermes-2.5-neural-chat
RpBird-Yi-34B

* fix bug with auto rep break when no repetition was found

* allow giving extra instructions to narrator agent

* emit agents as needed, not constantly

* fix a bunch of vue alerts

* fix request-client-status event

* remove undefined reference

* log client / status emit

* worldstate component track scene time

* Tess
Noromaid

* fix narrate-character prompt context length overflow issues

* disable worldstate refresh button while waiting for response

* history timestamp moved to tooltip off of history button

* fixes #39: using openai embeddings for chromadb tends to error

* adjust conversation agent default instructions

* poetry lock

* remove debug message

* chromadb - agent status error if openai embeddings are selected and api key isn't set

* prep 0.16.0
2023-12-08 22:57:44 +02:00
veguAI
76b7b5c0e0
templating overview (#37)
readme updates

readme updates
2023-11-26 16:35:09 +02:00
veguAI
97bfd3a672
Add files via upload 2023-11-26 16:31:49 +02:00
fiwo
496eb469db
Prep 0.14.0 (#34)
* tts agent first progress

* coqui support
voice lists

* orca-2

* tts tweaks

* switch to ux for audio gen

* some tweaks for the new audio queue

* fix error handling if llm fails to create a good world state on initial scene load

* loading creative mode for a new scene will now ask for confirmation if the current scene has unsaved progress

* local tts support

* fix voice list reloading when switching tts api
fix agent config ux to auto save on change, remove save / close buttons

* only do a delayed save on agent config on text input changes

* OrionStar

* don't allow scene loading when llm agents aren't correctly configured

* wire summarization to game loop, summarizer agent configs

* fix issues with time passage

* editor fix narrator messages

* 0.14.0

* poetry lock

* requires_llm_client moved to cls property

* add additional config stubs

* tts still loads voices even if the agent is disabled

* fix bug that would keep losing voice selection for tts agent after backend restart

* update tts install requirements

* remove debug output
2023-11-24 22:08:13 +02:00
FInalWombat
bc3f5d63c8
Add files via upload 2023-11-12 15:42:07 +02:00
FInalWombat
e6b21789d1
Prep 0.11.0 (#19)
* dolphin mistral template

* remove trailing \n before attaching the model response

* improve prompt and validator for generated human age

* fix issue where errors during character creation process would not be
communicated to the ux and the character creator would appear stuck

* add dolphin mistral to list

* add talemate_env

* poetry relock

* add json schema for talemate scene files

* fix issues with pydantic after version upgrade

* add json extract util functions

* fix pydantic model

* use extract json function

* scene generator, better scene name prompt

* OpenHermes-2-Mistral

* alpaca base template
Amethyst 20B template

* character description is no longer part of the sheet and needs to be added separately

* fix pydantic validation

* fix issue where sometimes partial emote strings were kept at the end of dialogue

* no need to commit character name to memory

* dedupe prompts

* clean up extra linebreaks in prompts

* experimental editor agent
agent signals first progress

* take out hardcoded example

* amethyst llm prompt template

* editor agent disableable
agent edit modal tweaks

* world state agent
agent action config schema

* director agent disableable
remove automatic actions config from ux (deprecated)

* fix responsive update when toggling enable on or off in agent dialog

* prompt adjustments
fix divine intellect preset (mirostat values were way off)
fix world state regenerating every turn regardless of setting

* move templates for world state from summarizer to worldstate agent

* conversation agent generation length setting

* conversation agent jiggle attribute (randomize offset to certain inference parameters)

* relabel

* scene cover image set to cover as much space as it can

* add character sheet to dialogue example generate prompt

* character creator agent mixin use set_processing

* add <|im_end|> to stopping strings

* add random number gen to template functions

* SynthIA and Tiefighter

* create new persisted characters ouf of world state
natural flow option for conversation agent to help guide multi character conversations

* conversation agent natural flow improvements

* fix bug with 1h time passage option

* some templates

* poetry relock

* fix config validation

* fix issues when determining scene history context length to stay within budget

* fixes to world state json parsing
fixes to conversation context length

* remove unused import

* update windows install scripts

* zephyr

* </s> stopping string

* dialog cleanup utils improved

* add agents and clients key to the config example
2023-10-28 11:33:51 +03:00
FInalWombat
4c15ca5290
Update linux-install.md 2023-10-15 16:09:41 +03:00
FInalWombat
73240b5791
Prep 0.10.0 (#12)
* track time passage in scene using iso 8601 format

* chromadb openai instructions

model recommendations updated

* time context passed to long term memory

* add some pre-established history for testing purposes

* time passage

analyze dialogue to template

query_text template function

analyze text and answer question summarizer function

llm prompt template adjustments

iso8601 time utils

chromadb docs adjustments

* didn't mean to remove this

* fix ClientContext stacking

* conversation cleanup tweaks

* prompt prepared response padding

* fix some bugs causing conversation lines containing : to be terminated
early

* fixes issue with chara importing dialogue examples as a huge blob instead of
splitting into lines

dialogue example in conversation template randomized

* llm prompt template for Speechless-Llama2-Hermes-Orca-Platypus-WizardLM

* version to 0.10.0
2023-10-02 01:38:02 +03:00
FInalWombat
c0173523f5
Update chromadb.md 2023-09-18 11:20:04 +03:00
FInalWombat
23f26b75da
Update chromadb.md 2023-09-18 11:18:52 +03:00
FInalWombat
d396e9b1f5
Update chromadb.md 2023-09-18 11:18:29 +03:00
FinalWombat
b2d7adc40e add client instructions 2023-09-17 17:11:58 +03:00
FinalWombat
6d93b041c5 initial commit 2023-09-17 16:46:42 +03:00