Mirror of https://github.com/LostRuins/koboldcpp.git, synced 2025-09-05 14:49:06 +00:00
* speculative : fix batch sizes at initialization (ggml-ci)
* speculative : handle params.n_predict == -1
* speculative : limit batch size to llama_n_batch
Files:

* CMakeLists.txt
* README.md
* speculative.cpp
llama.cpp/examples/speculative
Demonstration of speculative decoding and tree-based speculative decoding techniques.
More info:
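The core idea of speculative decoding can be sketched as follows: a small, fast draft model proposes several tokens ahead, and the large target model verifies them in one pass, accepting the longest matching prefix plus one correction. The sketch below is a minimal toy illustration, not the actual `speculative.cpp` implementation; both "models" are stand-in functions over a toy integer vocabulary, and greedy (deterministic) sampling is assumed.

```python
def draft_model(ctx):
    # Toy stand-in for a small, fast draft model (greedy next-token rule).
    return (ctx[-1] * 3 + 1) % 10

def target_model(ctx):
    # Toy stand-in for the large target model; it disagrees with the
    # draft model whenever the last token is a multiple of 4.
    if ctx[-1] % 4 == 0:
        return (ctx[-1] + 7) % 10
    return (ctx[-1] * 3 + 1) % 10

def speculative_decode(prompt, n_predict=12, n_draft=4):
    ctx = list(prompt)
    generated = []
    while len(generated) < n_predict:
        # 1. Draft phase: the draft model proposes n_draft tokens greedily.
        draft, tmp = [], list(ctx)
        for _ in range(n_draft):
            t = draft_model(tmp)
            draft.append(t)
            tmp.append(t)
        # 2. Verify phase: the target model checks each drafted token in
        #    order, accepting matches; the first mismatch is replaced by
        #    the target model's own token and verification stops there.
        accepted, tmp = [], list(ctx)
        for t in draft:
            tgt = target_model(tmp)
            accepted.append(tgt)
            tmp.append(tgt)
            if tgt != t:
                break
        else:
            # All draft tokens accepted: the target yields one bonus token.
            accepted.append(target_model(tmp))
        ctx.extend(accepted)
        generated.extend(accepted)
    return generated[:n_predict]

print(speculative_decode([1], n_predict=12, n_draft=4))
```

When the draft model agrees with the target, each verification step yields up to `n_draft + 1` tokens for a single target-model pass, which is where the speedup comes from; tree-based variants extend this by drafting several candidate branches at once.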