Mirror of https://github.com/LostRuins/koboldcpp.git (synced 2025-09-16 03:49:42 +00:00)
llama-server : implement universal assisted decoding

* Erase prompt tail for kv-cache
* Set vocab_dft_compatible in common_speculative
* Rename ctx_main to ctx_tgt
* Move vocab_dft_compatible to spec struct
* Clear mem_dft, remove mem
* Detokenize id_last for incompatible models
* Update comment
* Add --spec-replace flag
* Accept special tokens when translating between draft/main models
* Escape spec-replace
* Clamp draft result to size to params.n_draft
* Fix comment
* Clean up code
* Restore old example
* Log common_speculative_are_compatible in speculative example
* Fix
* Update common/speculative.cpp (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)
* Update common/speculative.cpp (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)
* Update common/speculative.cpp (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
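The commit above centers on "universal assisted decoding": letting a draft model speed up a target model even when the two use incompatible vocabularies, by detokenizing the draft's tokens to text and re-tokenizing that text with the target vocabulary. Below is a minimal Python sketch of just that translation step. The `ToyVocab` word-level tokenizer and `translate_draft_tokens` helper are hypothetical stand-ins, not the actual llama.cpp API.

```python
# Hypothetical word-level tokenizer standing in for a real model vocabulary.
class ToyVocab:
    def __init__(self, words):
        self.id_of = {w: i for i, w in enumerate(words)}
        self.word_of = {i: w for w, i in self.id_of.items()}

    def tokenize(self, text):
        return [self.id_of[w] for w in text.split()]

    def detokenize(self, ids):
        return " ".join(self.word_of[i] for i in ids)


def translate_draft_tokens(draft_vocab, target_vocab, draft_ids):
    """Bridge incompatible vocabularies by round-tripping through text,
    mirroring the detokenize/retokenize idea behind universal assisted
    decoding (a sketch, not the llama.cpp implementation)."""
    text = draft_vocab.detokenize(draft_ids)
    return target_vocab.tokenize(text)


# Same words, different id assignments -> incompatible vocabularies.
draft = ToyVocab(["the", "cat", "sat", "down"])
target = ToyVocab(["down", "sat", "cat", "the"])

draft_ids = draft.tokenize("the cat sat")            # ids in the draft vocab
target_ids = translate_draft_tokens(draft, target, draft_ids)
assert target.detokenize(target_ids) == "the cat sat"
```

In practice the real code must also handle special tokens and multi-token pieces (hence the commit's "accept special tokens when translating" item), and the draft output is clamped to `params.n_draft` tokens before verification.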
Directory listing:

* batched-bench
* cvector-generator
* export-lora
* gguf-split
* imatrix
* llama-bench
* main
* mtmd
* perplexity
* quantize
* rpc
* run
* server
* tokenize
* tts
* CMakeLists.txt