koboldcpp/src
Latest commit: e072b2052e by Diego Devesa, 2025-11-28 17:33:23 +02:00

ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched (#17276)

* ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched
  Enabled in ggml-ci for testing.
* llama : update worst-case graph for unified cache
* ci : disable op offload in some tests
* fix spelling

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
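The commit title describes an option that forbids buffer reallocation inside ggml_backend_sched, which ggml-ci enables to catch undersized worst-case graph reservations. The sketch below illustrates only that guard pattern under stated assumptions: the struct and function names are hypothetical, and reading the option from an environment variable is an assumption for illustration, not the actual ggml_backend_sched implementation.

```cpp
// Illustrative sketch of a "no realloc" guard; names are hypothetical,
// not the real ggml_backend_sched internals.
#include <cstdio>
#include <cstdlib>

struct sched_buffer {
    size_t capacity; // bytes currently reserved for graph allocations
};

// Grow the reservation on demand unless reallocation is disabled, in which
// case a capacity overflow becomes a hard failure (useful in CI to detect
// worst-case graph estimates that are too small).
static bool sched_ensure_capacity(sched_buffer & buf, size_t required, bool no_realloc) {
    if (required <= buf.capacity) {
        return true; // fits in the existing reservation, nothing to do
    }
    if (no_realloc) {
        std::fprintf(stderr,
            "sched: reallocation required (%zu > %zu) but disabled\n",
            required, buf.capacity);
        return false; // caller aborts the graph compute
    }
    buf.capacity = required; // normal path: grow the reservation
    return true;
}

int main() {
    sched_buffer buf;
    buf.capacity = 1024;
    // Assumed here to be toggled via an environment variable; the commit may
    // wire it up differently (e.g. as a build option).
    const bool no_realloc = std::getenv("GGML_SCHED_NO_REALLOC") != nullptr;
    if (!sched_ensure_capacity(buf, 4096, no_realloc)) {
        return 1;
    }
    std::printf("capacity: %zu bytes\n", buf.capacity);
    return 0;
}
```

With GGML_SCHED_NO_REALLOC set in the environment, the sketch fails fast instead of growing the buffer, which mirrors how a CI run would surface a worst-case graph that was sized too small.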
File                          Last commit
models/                       model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
CMakeLists.txt                model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-adapter.cpp
llama-adapter.h
llama-arch.cpp                model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-arch.h                  model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-batch.cpp
llama-batch.h
llama-chat.cpp
llama-chat.h
llama-context.cpp             ggml : add GGML_SCHED_NO_REALLOC option to disable reallocations in ggml_backend_sched (#17276), 2025-11-28 17:33:23 +02:00
llama-context.h
llama-cparams.cpp
llama-cparams.h
llama-grammar.cpp             grammar: fix regression caused by #17381 (#17412), 2025-11-20 18:35:10 +01:00
llama-grammar.h
llama-graph.cpp               ggml : add ggml_top_k (#17365), 2025-11-25 15:31:43 +02:00
llama-graph.h
llama-hparams.cpp
llama-hparams.h               model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-impl.cpp                common : more accurate sampling timing (#17382), 2025-11-20 13:40:10 +02:00
llama-impl.h
llama-io.cpp
llama-io.h
llama-kv-cache-iswa.cpp
llama-kv-cache-iswa.h
llama-kv-cache.cpp
llama-kv-cache.h
llama-kv-cells.h
llama-memory-hybrid.cpp
llama-memory-hybrid.h
llama-memory-recurrent.cpp
llama-memory-recurrent.h
llama-memory.cpp
llama-memory.h
llama-mmap.cpp
llama-mmap.h
llama-model-loader.cpp
llama-model-loader.h
llama-model-saver.cpp
llama-model-saver.h
llama-model.cpp               model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-model.h                 model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-quant.cpp               model : Qwen3 Next (#16095), 2025-11-28 12:02:56 +01:00
llama-quant.h
llama-sampling.cpp            common : more accurate sampling timing (#17382), 2025-11-20 13:40:10 +02:00
llama-sampling.h
llama-vocab.cpp               vocab : call reserve() for building plamo-2-translate suffix (#17343), 2025-11-18 18:58:22 +01:00
llama-vocab.h
llama.cpp
unicode-data.cpp
unicode-data.h
unicode.cpp
unicode.h