koboldcpp/src
Georgi Gerganov 68e7ea3eab
spec : parallel drafting support (#22838)
* spec : refactor

* spec : drop support for incompatible vocabs

* spec : update common_speculative_init()

* cont : pass seq_id

* cont : dedup ctx_seq_rm_type

* server : sketch the ctx_dft decode loop

* server : draft prompt cache and checkpoints

* server : improve ctx names

* server, spec : transition to unified spec context

* cont : sync main and drft contexts

* cont : async drft eval when possible

* cont : handle non-ckpt models

* cont : pass correct n_past for drafting

* cont : process images through the draft context

* spec : handle draft running out of context

* server : fix mtmd draft processing

* server : fix URL for draft model

* server : add comment

* server : clean-up + dry

* speculative-simple : update

* spec : fix n_past type

* server : fix slot ctx_drft ptr

* tools : update readme

* naming : improve consistency

* spec : refactor for multi-sequence speculative context

* cont : prepare params

* cont : prepare params

* spec : support parallel drafts

* server : support parallel drafting

* llama : reuse device buffers when possible

* server, spec : clean-up

* cont : clean-up

* cont : minor

* spec : reset `drafting` flag at the end

* spec : introduce `common_speculative_process()`

* spec : allow for multiple spec types (chain of speculators)

* replace the old type field of type common_speculative_type in the
  common_params_speculative struct with a vector, allowing multiple
  types to be specified (as sketched below)

* introduce common_get_enabled_speculative_impls(const std::vector<enum common_speculative_type>)
  to figure out which implementations the user has enabled

* introduce common_speculative_type_from_names(const std::vector<std::string> & names)
  to parse the spec type names already provided by the user
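
A minimal sketch of the three items above, assuming hypothetical enum
values, option-name strings, and error handling - only the struct and
function names come from this commit:

    #include <algorithm>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // hypothetical speculator kinds - the real enum values are not part
    // of this commit message
    enum common_speculative_type {
        COMMON_SPECULATIVE_TYPE_DRAFT_MODEL,
        COMMON_SPECULATIVE_TYPE_NGRAM,
    };

    struct common_params_speculative {
        // was: enum common_speculative_type type;
        std::vector<enum common_speculative_type> types; // chain of speculators
        // ... other fields omitted
    };

    // parse the user-provided spec type names (the name strings here are assumed)
    static std::vector<enum common_speculative_type>
    common_speculative_type_from_names(const std::vector<std::string> & names) {
        std::vector<enum common_speculative_type> types;
        for (const auto & name : names) {
            if (name == "draft-model") {
                types.push_back(COMMON_SPECULATIVE_TYPE_DRAFT_MODEL);
            } else if (name == "ngram") {
                types.push_back(COMMON_SPECULATIVE_TYPE_NGRAM);
            } else {
                throw std::invalid_argument("unknown speculative type: " + name);
            }
        }
        return types;
    }

    // deduplicate while preserving order, so each enabled implementation
    // is initialized exactly once
    static std::vector<enum common_speculative_type>
    common_get_enabled_speculative_impls(const std::vector<enum common_speculative_type> & types) {
        std::vector<enum common_speculative_type> impls;
        for (auto t : types) {
            if (std::find(impls.begin(), impls.end(), t) == impls.end()) {
                impls.push_back(t);
            }
        }
        return impls;
    }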

* all speculators run sequentially; the best one wins (we verify its drafted tokens)

* maximize the expected number of accepted tokens for the current round by
  taking the product of the probability of accepting the current token
  (n_acc_tokens / n_gen_drafts) and the draft's length (see the sketch below)
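
A minimal sketch of the selection rule from the last two items - the
spec_draft struct and its counters are illustrative; only the
n_acc_tokens / n_gen_drafts ratio and the product with the draft length
come from this commit:

    #include <cstdint>
    #include <vector>

    typedef int32_t llama_token;

    // per-speculator state - illustrative only
    struct spec_draft {
        std::vector<llama_token> tokens; // tokens drafted this round
        int n_acc_tokens = 0;            // tokens accepted from this speculator so far
        int n_gen_drafts = 0;            // draft tokens generated by this speculator so far
    };

    // score = P(accept) * draft length, where P(accept) is estimated from
    // the speculator's history as n_acc_tokens / n_gen_drafts
    static const spec_draft * spec_pick_best(const std::vector<spec_draft> & drafts) {
        const spec_draft * best = nullptr;
        double best_score = -1.0;
        for (const auto & d : drafts) {
            const double p_acc = d.n_gen_drafts > 0
                ? (double) d.n_acc_tokens / d.n_gen_drafts
                : 1.0; // no history yet - be optimistic
            const double score = p_acc * (double) d.tokens.size();
            if (score > best_score) {
                best_score = score;
                best       = &d;
            }
        }
        return best; // only the winner's tokens go to the main model for verification
    }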

---------

Co-authored-by: Petros Sideris <petros.sideris@nokia.com>
2026-05-11 19:09:43 +03:00
models model : fix model type check for granite/llama3 and deepseek2/glm4.7 lite (#22870) 2026-05-10 08:44:29 +02:00
CMakeLists.txt cmake: use glob to collect src/models sources (#22005) 2026-04-16 23:25:16 +02:00
llama-adapter.cpp fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-adapter.h llama : re-enable manual LoRA adapter free (#19983) 2026-03-18 12:03:26 +02:00
llama-arch.cpp model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-arch.h model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-batch.cpp kv-cache : fix M-RoPE checkpoints (#20132) 2026-03-06 08:46:51 +02:00
llama-batch.h fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-chat.cpp model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-chat.h model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-context.cpp spec : parallel drafting support (#22838) 2026-05-11 19:09:43 +03:00
llama-context.h llama : add option to save memory in device buffers (#22679) 2026-05-05 06:35:07 +03:00
llama-cparams.cpp cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 2025-06-15 10:08:58 +03:00
llama-cparams.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-ext.h llama-ext : fix exports (#22202) 2026-04-21 11:04:46 +03:00
llama-grammar.cpp common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) 2026-03-21 18:43:35 +01:00
llama-grammar.h common/grammar : replace problematic backtracking regex [\s\S]* (#18342) 2026-01-03 16:02:43 -06:00
llama-graph.cpp graph : handle non-contiguous Q/K/V in mul_mat_aux (#22630) 2026-05-05 06:34:44 +03:00
llama-graph.h model : refactor QKV into common build_qkv and create_tensor_qkv helpers (#21245) 2026-04-16 17:41:34 +02:00
llama-hparams.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-hparams.h model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-impl.cpp llama : correct platform-independent loading of BOOL metadata (#21428) 2026-04-06 01:40:38 +02:00
llama-impl.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-io.cpp server : avoid checkpoint data host copies (#22558) 2026-05-02 18:03:25 +03:00
llama-io.h llama : add option to save memory in device buffers (#22679) 2026-05-05 06:35:07 +03:00
llama-kv-cache-iswa.cpp (revert) kv-cache : do not quantize SWA KV cache (#21332) 2026-04-03 09:07:01 +03:00
llama-kv-cache-iswa.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp ggml : implement fast walsh-hadamard transform for kv rotation (#21352) (#22631) 2026-05-05 10:05:05 +08:00
llama-kv-cache.h kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cells.h llama: store mrope data in KV cell (#16825) 2025-10-29 18:09:18 +01:00
llama-memory-hybrid-iswa.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid-iswa.h memory : add llama_memory_hybrid_iswa (#18601) 2026-01-21 14:30:23 +02:00
llama-memory-hybrid.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp llama : fix device state save/load (#22805) 2026-05-07 21:43:40 +03:00
llama-memory-recurrent.h llama: consistent ctx <-> buf order for KV cache (#16746) 2025-10-28 11:23:54 +01:00
llama-memory.cpp memory : correctly handle failure in apply() (#14438) 2025-06-30 18:03:03 +03:00
llama-memory.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-mmap.cpp Update llama-mmap to use ftello/fseeko (#22497) 2026-04-30 14:17:52 -07:00
llama-mmap.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-loader.cpp ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
llama-model-loader.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.cpp model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-model-saver.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model.cpp model : fix model type check for granite/llama3 and deepseek2/glm4.7 lite (#22870) 2026-05-10 08:44:29 +02:00
llama-model.h model: move load_hparams and load_tensors to per-model definition (#22004) 2026-05-04 12:36:59 +02:00
llama-quant.cpp model: move load_hparams and load_tensors to per-model definition (#22004) 2026-05-04 12:36:59 +02:00
llama-quant.h llama : refactor src/llama.cpp (#10902) 2025-01-03 10:18:53 +02:00
llama-sampler.cpp llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-sampler.h llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-vocab.cpp model : add sarvam_moe architecture support (#20275) 2026-05-09 16:31:50 +02:00
llama-vocab.h model : add sarvam_moe architecture support (#20275) 2026-05-09 16:31:50 +02:00
llama.cpp llama : add missing call to ggml_backend_load_all() (#22752) 2026-05-07 08:24:47 +03:00
unicode-data.cpp server : better security control for public deployments (#9776) 2024-10-08 13:27:04 +02:00
unicode-data.h llama : reduce compile time and binary size (#9712) 2024-10-02 15:49:55 +02:00
unicode.cpp unicode : add custom Qwen2 regex handler to fix segfault on long input (#21257) 2026-04-07 16:13:38 +03:00
unicode.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00