Mirror of https://github.com/LostRuins/koboldcpp.git (synced 2025-09-16 11:59:42 +00:00)
llama-server : implement universal assisted decoding

* Erase prompt tail for kv-cache
* set vocab_dft_compatible in common_speculative
* rename ctx_main to ctx_tgt
* move vocab_dft_compatible to spec struct
* clear mem_dft, remove mem
* detokenize id_last for incompatible models
* update comment
* add --spec-replace flag
* accept special tokens when translating between draft/main models
* escape spec-replace
* clamp draft result size to params.n_draft
* fix comment
* clean up code
* restore old example
* log common_speculative_are_compatible in speculative example
* fix
* Update common/speculative.cpp (three review-suggestion commits)

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
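The commit message above outlines the core trick of universal assisted decoding: when the draft and target models have incompatible vocabularies, the draft's proposed tokens are detokenized back to text, re-tokenized with the target vocabulary, and the result is clamped to the requested draft length (params.n_draft). The sketch below is a minimal, self-contained toy that illustrates that detokenize/re-tokenize/clamp step; the per-character and per-word tokenizers and the function names are stand-ins invented for illustration, not llama.cpp's actual common/speculative.cpp API.

```cpp
// Toy illustration of translating a draft proposal between incompatible
// vocabularies: detokenize with the draft vocab, re-tokenize with the
// target vocab, clamp to n_draft. Stand-in tokenizers, not llama.cpp code.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

using token = int32_t;

// Toy "draft" vocabulary: one token per byte.
std::string draft_detokenize(const std::vector<token> & toks) {
    std::string text;
    for (token t : toks) text += static_cast<char>(t);
    return text;
}

// Toy "target" vocabulary: one token per whitespace-separated word,
// encoded here as the word length just to produce distinct ids.
std::vector<token> target_tokenize(const std::string & text) {
    std::vector<token> toks;
    size_t i = 0;
    while (i < text.size()) {
        size_t j = text.find(' ', i);
        if (j == std::string::npos) j = text.size();
        if (j > i) toks.push_back(static_cast<token>(j - i));
        i = j + 1;
    }
    return toks;
}

// Translate the draft proposal into target-model tokens and clamp its
// length to the requested number of draft tokens.
std::vector<token> translate_draft(const std::vector<token> & draft_toks, size_t n_draft) {
    const std::string text = draft_detokenize(draft_toks); // draft vocab -> text
    std::vector<token> out = target_tokenize(text);        // text -> target vocab
    if (out.size() > n_draft) out.resize(n_draft);          // clamp to n_draft
    return out;
}

int main() {
    // "hi there" expressed as draft (per-byte) tokens.
    std::vector<token> draft = {'h', 'i', ' ', 't', 'h', 'e', 'r', 'e'};
    for (token t : translate_draft(draft, 4)) std::cout << t << ' ';
    std::cout << '\n'; // prints "2 5"
}
```

In the real implementation this translation only happens when the two vocabularies are flagged as incompatible (vocab_dft_compatible is false); compatible models keep passing token ids through directly.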
Files in the common/ directory:

arg.cpp
arg.h
base64.hpp
build-info.cpp.in
chat-parser.cpp
chat-parser.h
chat.cpp
chat.h
CMakeLists.txt
common.cpp
common.h
console.cpp
console.h
json-partial.cpp
json-partial.h
json-schema-to-grammar.cpp
json-schema-to-grammar.h
llguidance.cpp
log.cpp
log.h
ngram-cache.cpp
ngram-cache.h
regex-partial.cpp
regex-partial.h
sampling.cpp
sampling.h
speculative.cpp
speculative.h