koboldcpp/tools

Latest commit aaf4a4d5e0 by smugman-dot (2026-05-07 21:14:03 +02:00):
webui: add option for LLM title generation (#22265)

* webui: add LLM title generation option
* webui: use chat_template_kwargs for title gen + fix conversation check
* webui: capture firstUserMessage before async streamChatCompletion to fix race condition
* webui: extract LLM title generation into separate method
* webui: use constants and ChatService for LLM generated titles
* webui: rebuild static output
* webui: add LLM title generation setting to new settings location
* webui: use sendMessage in generateTitle
* webui: rebuild static output
* webui: fix formatting
* webui: configurable title prompt, remove think tag regexes, fix TS error
* webui: group title constants into TITLE object, use TruncatedText for CSS truncation and fix race condition
* webui: rebuild static output
Name              | Last commit                                                                        | Date
batched-bench     | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
cli               | docs : update speculative decoding parameters after refactor (#22397) (#22539)     | 2026-05-04 08:52:07 +03:00
completion        | docs : update speculative decoding parameters after refactor (#22397) (#22539)     | 2026-05-04 08:52:07 +03:00
cvector-generator | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
export-lora       | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
fit-params        | fit-params : refactor + add option to output estimated memory per device (#22171)  | 2026-04-21 09:54:36 +03:00
gguf-split        | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
imatrix           | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
llama-bench       | spec : refactor params (#22397)                                                    | 2026-04-28 09:07:33 +03:00
mtmd              | mtmd: fix whisper audio tail truncation by exposing padded buffer to FFT (#22770)  | 2026-05-07 14:01:01 +02:00
parser            | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
perplexity        | fit-params : refactor + add option to output estimated memory per device (#22171)  | 2026-04-21 09:54:36 +03:00
quantize          | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
results           | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
rpc               | fix: rpc-server cache may not work in Windows environments (#22394)                | 2026-04-27 17:25:09 +03:00
server            | webui: add option for LLM title generation (#22265)                                | 2026-05-07 21:14:03 +02:00
tokenize          | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
tts               | libs : rename libcommon -> libllama-common (#21936)                                | 2026-04-17 11:11:46 +03:00
CMakeLists.txt    | llama: end-to-end tests (#19802)                                                   | 2026-03-08 12:30:21 +01:00