koboldcpp/src
Latest commit: Concedo 7f618454ff "Merge branch 'upstream' into concedo_experimental" (2026-01-18 23:24:29 +08:00)

Conflicts resolved in this merge:

    .github/labeler.yml
    CODEOWNERS
    docs/backend/OPENCL.md
    docs/ops.md
    docs/ops/CANN.csv
    docs/ops/WebGPU.csv
    ggml/src/ggml-blas/CMakeLists.txt
    ggml/src/ggml-opencl/kernels/mul_mv_q6_k.cl
    ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
    ggml/src/ggml-webgpu/ggml-webgpu.cpp
    ggml/src/ggml-webgpu/wgsl-shaders/cpy.tmpl.wgsl
    ggml/src/ggml-webgpu/wgsl-shaders/set_rows.wgsl
    tests/test-backend-ops.cpp
| File | Last commit message | Date |
|------|---------------------|------|
| models | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-adapter.cpp | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-adapter.h | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-arch.cpp | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-arch.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-batch.cpp | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-batch.h | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-chat.cpp | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-chat.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-context.cpp | embeddings memory usage regression fix | 2026-01-18 16:26:52 +08:00 |
| llama-context.h | context : reserve new scheduler when graph topology changes (#18547) | 2026-01-15 16:39:17 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | context : reserve new scheduler when graph topology changes (#18547) | 2026-01-15 16:39:17 +02:00 |
| llama-grammar.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-04 11:14:33 +08:00 |
| llama-grammar.h | common/grammar : replace problematic backtracking regex [\s\S]* (#18342) | 2026-01-03 16:02:43 -06:00 |
| llama-graph.cpp | graph : clean up t5 input builders (#18795) | 2026-01-13 09:43:51 +01:00 |
| llama-graph.h | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| llama-hparams.cpp | kv-cache : optimize KQ mask construction (#18842) | 2026-01-17 15:42:42 +02:00 |
| llama-hparams.h | kv-cache : optimize KQ mask construction (#18842) | 2026-01-17 15:42:42 +02:00 |
| llama-impl.cpp | Merge commit '4a4f7e6550' into concedo_experimental | 2025-12-17 14:30:39 +08:00 |
| llama-impl.h | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | still not really working right | 2025-11-09 01:57:48 +08:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-18 23:24:29 +08:00 |
| llama-kv-cache.h | kv-cache : optimize KQ mask construction (#18842) | 2026-01-17 15:42:42 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | Merge branch 'upstream' into concedo_experimental | 2025-11-11 17:10:11 +08:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | Merge commit '3e4bb29666' into concedo_experimental | 2026-01-16 17:55:22 +08:00 |
| llama-mmap.h | llama : add use_direct_io flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-loader.cpp | Merge commit '2a13180100' into concedo_experimental | 2026-01-16 21:52:01 +08:00 |
| llama-model-loader.h | llama : add use_direct_io flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-saver.cpp | model : add LFM2-ColBert-350M (#18607) | 2026-01-05 19:52:56 +01:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-17 00:41:28 +08:00 |
| llama-model.h | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-quant.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-09 01:23:10 +08:00 |
| llama-quant.h | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | llama : add adaptive-p sampler (#17927) | 2026-01-15 19:16:29 +02:00 |
| llama-sampling.h | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| llama-vocab.cpp | Merge commit '3e4bb29666' into concedo_experimental | 2026-01-16 17:55:22 +08:00 |
| llama-vocab.h | Merge commit '3e4bb29666' into concedo_experimental | 2026-01-16 17:55:22 +08:00 |
| llama.cpp | Merge commit 'a61c8bc3bf' into concedo_experimental | 2026-01-13 23:06:50 +08:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | Merge branch 'upstream' into concedo_experimental | 2026-01-02 11:05:20 +08:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |