koboldcpp/src
Concedo cc82c3164e Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.devops/intel.Dockerfile
#	.github/workflows/build-cross.yml
#	.github/workflows/build-sycl.yml
#	.github/workflows/build.yml
#	.github/workflows/editorconfig.yml
#	.github/workflows/release.yml
#	cmake/riscv64-spacemit-linux-gnu-gcc.cmake
#	docs/backend/OPENVINO.md
#	docs/backend/SYCL.md
#	docs/build-riscv64-spacemit.md
#	docs/ops.md
#	docs/ops/WebGPU.csv
#	embd_res/ggml-vocab-qwen35.gguf
#	embd_res/ggml-vocab-qwen35.gguf.inp
#	embd_res/ggml-vocab-qwen35.gguf.out
#	examples/model-conversion/Makefile
#	ggml/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-hexagon/ggml-hexagon.cpp
#	ggml/src/ggml-hexagon/htp/hmx-flash-attn-ops.c
#	ggml/src/ggml-hexagon/htp/hmx-matmul-ops.c
#	ggml/src/ggml-hexagon/htp/hmx-utils.h
#	ggml/src/ggml-hexagon/htp/htp-ops.h
#	ggml/src/ggml-hexagon/htp/hvx-utils.h
#	ggml/src/ggml-hexagon/htp/main.c
#	ggml/src/ggml-hexagon/htp/unary-ops.c
#	ggml/src/ggml-opencl/CMakeLists.txt
#	ggml/src/ggml-opencl/ggml-opencl.cpp
#	ggml/src/ggml-opencl/kernels/cvt.cl
#	ggml/src/ggml-sycl/CMakeLists.txt
#	ggml/src/ggml-sycl/common.cpp
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
#	ggml/src/ggml-webgpu/ggml-webgpu.cpp
#	ggml/src/ggml-webgpu/wgsl-shaders/common_decls.tmpl
#	ggml/src/ggml-webgpu/wgsl-shaders/flash_attn_tile.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/flash_attn_vec_reduce.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/flash_attn_vec_split.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/get_rows.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_decls.tmpl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_vec_acc.tmpl
#	ggml/src/ggml-webgpu/wgsl-shaders/unary.wgsl
#	ggml/src/ggml-zendnn/CMakeLists.txt
#	ggml/src/ggml-zendnn/ggml-zendnn.cpp
#	scripts/snapdragon/adb/run-completion.sh
#	tests/CMakeLists.txt
#	tools/cli/README.md
#	tools/completion/README.md
#	tools/mtmd/clip-impl.h
#	tools/mtmd/clip.cpp
#	tools/mtmd/clip.h
#	tools/server/README.md
2026-05-14 19:04:04 +08:00
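The `# Conflicts:` block above is the standard template Git writes into a merge commit message when the merge stopped on conflicts that had to be resolved by hand. As a minimal sketch (assuming an ordinary Git checkout, nothing specific to this repo; the helper name `conflicted_paths` is hypothetical), the same list of unmerged paths can be recovered mid-merge with `git diff --diff-filter=U`:

```python
import subprocess

# Hypothetical helper, not part of koboldcpp: list paths still in a
# conflicted (unmerged) state, i.e. the files Git would put under
# "# Conflicts:" in the merge commit message.
def conflicted_paths(repo_dir: str = ".") -> list[str]:
    result = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=U"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
        check=True,  # raise if git fails (e.g. not a repo)
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    # Run inside a checkout after e.g. `git merge upstream` reports conflicts.
    for path in conflicted_paths():
        print(path)
```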
models Merge branch 'upstream' into concedo_experimental 2026-05-11 16:18:28 +08:00
llama-adapter.cpp fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-adapter.h llama : re-enable manual LoRA adapter free (#19983) 2026-03-18 12:03:26 +02:00
llama-arch.cpp model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-arch.h model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-batch.cpp kv-cache : fix M-RoPE checkpoints (#20132) 2026-03-06 08:46:51 +02:00
llama-batch.h fix: correct misspellings in code comments (#21217) 2026-03-31 13:50:51 +02:00
llama-chat.cpp model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-chat.h model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
llama-context.cpp need to fix cuda compile. Merge branch 'upstream' into concedo_experimental 2026-05-12 20:47:07 +08:00
llama-context.h llama : add option to save memory in device buffers (#22679) 2026-05-05 06:35:07 +03:00
llama-cparams.cpp cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) 2025-06-15 10:08:58 +03:00
llama-cparams.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-ext.h llama-ext : fix exports (#22202) 2026-04-21 11:04:46 +03:00
llama-grammar.cpp Merge branch 'upstream' into concedo_experimental 2026-03-22 23:39:13 +08:00
llama-grammar.h common/grammar : replace problematic backtracking regex [\s\S]* (#18342) 2026-01-03 16:02:43 -06:00
llama-graph.cpp Merge remote-tracking branch 'origin/upstream' into concedo_experimental 2026-05-06 21:20:06 +08:00
llama-graph.h model : refactor QKV into common build_qkv and create_tensor_qkv helpers (#21245) 2026-04-16 17:41:34 +02:00
llama-hparams.cpp llama: dynamic head_dim and n_rot for SWA (#20301) 2026-03-09 22:22:39 +01:00
llama-hparams.h model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-impl.cpp Merge branch 'upstream' into concedo_experimental 2026-04-06 20:56:02 +08:00
llama-impl.h llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
llama-io.cpp server : avoid checkpoint data host copies (#22558) 2026-05-02 18:03:25 +03:00
llama-io.h llama : add option to save memory in device buffers (#22679) 2026-05-05 06:35:07 +03:00
llama-kv-cache-iswa.cpp handle SWA conflicting with rewind, increased default SWA padding. 2026-04-16 17:00:26 +08:00
llama-kv-cache-iswa.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp Merge remote-tracking branch 'origin/upstream' into concedo_experimental 2026-05-06 21:20:06 +08:00
llama-kv-cache.h kv-cache : support attention rotation for heterogeneous iSWA (#21513) 2026-04-07 20:31:28 +03:00
llama-kv-cells.h llama: store mrope data in KV cell (#16825) 2025-10-29 18:09:18 +01:00
llama-memory-hybrid-iswa.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid-iswa.h memory : add llama_memory_hybrid_iswa (#18601) 2026-01-21 14:30:23 +02:00
llama-memory-hybrid.cpp memory: respect unified KV cache in hybrid memory for eval tasks (#21224) 2026-04-01 12:50:17 +03:00
llama-memory-hybrid.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp Merge commit '66001722aa' into concedo_experimental 2026-05-11 15:40:10 +08:00
llama-memory-recurrent.h llama: consistent ctx <-> buf order for KV cache (#16746) 2025-10-28 11:23:54 +01:00
llama-memory.cpp memory : correctly handle failure in apply() (#14438) 2025-06-30 18:03:03 +03:00
llama-memory.h llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
llama-mmap.cpp Merge branch 'upstream' into concedo_experimental 2026-05-02 18:07:50 +08:00
llama-mmap.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-loader.cpp Merge commit '935a340292' into concedo_experimental 2026-05-06 21:02:25 +08:00
llama-model-loader.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model-saver.cpp model: Add Mimo v2.5 model support (#22493) 2026-05-07 13:21:58 +02:00
llama-model-saver.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
llama-model.cpp Merge branch 'upstream' into concedo_experimental 2026-05-11 16:18:28 +08:00
llama-model.h model: move load_hparams and load_tensors to per-model definition (#22004) 2026-05-04 12:36:59 +02:00
llama-quant.cpp Merge commit '935a340292' into concedo_experimental 2026-05-06 21:02:25 +08:00
llama-quant.h llama : refactor src/llama.cpp (#10902) 2025-01-03 10:18:53 +02:00
llama-sampler.cpp llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-sampler.h llama : rename llama-sampling to llama-sampler (#19363) 2026-02-06 07:26:54 +01:00
llama-vocab.cpp Merge branch 'upstream' into concedo_experimental 2026-05-11 16:18:28 +08:00
llama-vocab.h Merge branch 'upstream' into concedo_experimental 2026-05-11 16:18:28 +08:00
llama.cpp Merge branch 'upstream' into concedo_experimental 2026-05-08 14:48:57 +08:00
unicode-data.cpp
unicode-data.h
unicode.cpp unicode,test: add Qwen3.5 non-backtracking tokenizer handler and regr… (#22110) 2026-05-14 11:03:40 +02:00
unicode.h vocab: fix Gemma4 tokenizer (#21343) 2026-04-03 10:33:03 +02:00