koboldcpp/src
Concedo 1f803ae27b Merge branch 'upstream' into concedo_experimental (2026-02-04 16:21:06 +08:00)
# Conflicts:
#	.github/workflows/server.yml
#	CMakeLists.txt
#	cmake/common.cmake
#	ggml/src/ggml-virtgpu/apir_cs_ggml-rpc-front.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched-backend.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched-buffer-type.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched-buffer.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched-device.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched.cpp
#	ggml/src/ggml-virtgpu/backend/backend-dispatched.gen.h
#	ggml/src/ggml-virtgpu/backend/backend-dispatched.h
#	ggml/src/ggml-virtgpu/backend/backend.cpp
#	ggml/src/ggml-virtgpu/backend/shared/apir_cs.h
#	ggml/src/ggml-virtgpu/backend/shared/apir_cs_ggml.h
#	ggml/src/ggml-virtgpu/ggml-backend-buffer-type.cpp
#	ggml/src/ggml-virtgpu/ggml-backend-device.cpp
#	ggml/src/ggml-virtgpu/ggml-backend-reg.cpp
#	ggml/src/ggml-virtgpu/ggml-remoting.h
#	ggml/src/ggml-virtgpu/ggmlremoting_functions.yaml
#	ggml/src/ggml-virtgpu/regenerate_remoting.py
#	ggml/src/ggml-virtgpu/virtgpu-forward-backend.cpp
#	ggml/src/ggml-virtgpu/virtgpu-forward-buffer-type.cpp
#	ggml/src/ggml-virtgpu/virtgpu-forward-buffer.cpp
#	ggml/src/ggml-virtgpu/virtgpu-forward-device.cpp
#	ggml/src/ggml-virtgpu/virtgpu-forward-impl.h
#	ggml/src/ggml-virtgpu/virtgpu-forward.gen.h
#	ggml/src/ggml-virtgpu/virtgpu-shm.cpp
#	ggml/src/ggml-virtgpu/virtgpu.cpp
#	ggml/src/ggml-virtgpu/virtgpu.h
| Name | Last commit | Date |
|---|---|---|
| models | models : remove unnecessary cont in openelm (#19289) | 2026-02-03 14:20:57 +01:00 |
| llama-adapter.cpp | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-adapter.h | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-arch.cpp | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-arch.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-batch.cpp | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-batch.h | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-chat.cpp | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| llama-chat.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-context.cpp | Merge branch 'upstream' into concedo_experimental | 2026-02-04 16:21:06 +08:00 |
| llama-context.h | sampling : remove sampling branching in output_reserve (#18811) | 2026-01-28 05:59:30 +01:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | context : reserve new scheduler when graph topology changes (#18547) | 2026-01-15 16:39:17 +02:00 |
| llama-grammar.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-04 11:14:33 +08:00 |
| llama-grammar.h | common/grammar : replace problematic backtracking regex [\s\S]* (#18342) | 2026-01-03 16:02:43 -06:00 |
| llama-graph.cpp | sampling : delegate input allocation to the scheduler (#19266) | 2026-02-03 22:16:16 +02:00 |
| llama-graph.h | kv-cache : support V-less cache (#19067) | 2026-01-25 15:48:56 +02:00 |
| llama-hparams.cpp | kv-cache : support V-less cache (#19067) | 2026-01-25 15:48:56 +02:00 |
| llama-hparams.h | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| llama-impl.cpp | Merge commit '4a4f7e6550' into concedo_experimental | 2025-12-17 14:30:39 +08:00 |
| llama-impl.h | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | still not really working right | 2025-11-09 01:57:48 +08:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-30 20:37:37 +08:00 |
| llama-kv-cache.h | kv-cache : optimize KQ mask construction (#18842) | 2026-01-17 15:42:42 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid-iswa.cpp | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | Merge branch 'upstream' into concedo_experimental | 2026-02-01 22:35:25 +08:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | Merge commit '12a4a47e6a' into concedo_experimental | 2026-01-21 21:00:44 +08:00 |
| llama-mmap.h | llama : add use_direct_io flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-loader.cpp | Merge commit '88d23ad515' into concedo_experimental | 2026-01-29 22:25:56 +08:00 |
| llama-model-loader.h | llama : add use_direct_io flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-saver.cpp | kv-cache : support V-less cache (#19067) | 2026-01-25 15:48:56 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | Merge commit '88d23ad515' into concedo_experimental | 2026-01-29 22:25:56 +08:00 |
| llama-model.h | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-quant.cpp | Merge commit '88d23ad515' into concedo_experimental | 2026-01-29 22:25:56 +08:00 |
| llama-quant.h | | |
| llama-sampling.cpp | sampling : delegate input allocation to the scheduler (#19266) | 2026-02-03 22:16:16 +02:00 |
| llama-sampling.h | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| llama-vocab.cpp | Merge branch 'upstream' into concedo_experimental | 2026-02-03 19:00:42 +08:00 |
| llama-vocab.h | Merge commit '3e4bb29666' into concedo_experimental | 2026-01-16 17:55:22 +08:00 |
| llama.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-27 23:06:13 +08:00 |
| unicode-data.cpp | | |
| unicode-data.h | | |
| unicode.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-02 11:05:20 +08:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |