koboldcpp/src
Concedo 8c701d7ded Merge commit '72b090da2c' into concedo_experimental 2025-05-28 00:28:41 +08:00
# Conflicts:
#	docs/backend/CANN.md
#	docs/function-calling.md
#	examples/embedding/embedding.cpp
#	examples/retrieval/retrieval.cpp
#	ggml/src/ggml-cann/CMakeLists.txt
#	ggml/src/ggml-cann/Doxyfile
#	ggml/src/ggml-cann/acl_tensor.cpp
#	ggml/src/ggml-cann/acl_tensor.h
#	ggml/src/ggml-cann/aclnn_ops.cpp
#	ggml/src/ggml-cann/aclnn_ops.h
#	ggml/src/ggml-cann/common.h
#	ggml/src/ggml-cann/ggml-cann.cpp
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-sycl/binbcast.cpp
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/concat.cpp
#	ggml/src/ggml-sycl/conv.cpp
#	ggml/src/ggml-sycl/cpy.cpp
#	ggml/src/ggml-sycl/dmmv.cpp
#	ggml/src/ggml-sycl/element_wise.cpp
#	ggml/src/ggml-sycl/getrows.cpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	ggml/src/ggml-sycl/gla.cpp
#	ggml/src/ggml-sycl/mmvq.cpp
#	ggml/src/ggml-sycl/norm.cpp
#	ggml/src/ggml-sycl/outprod.cpp
#	ggml/src/ggml-sycl/rope.cpp
#	ggml/src/ggml-sycl/softmax.cpp
#	ggml/src/ggml-sycl/tsembd.cpp
#	ggml/src/ggml-sycl/wkv.cpp
#	scripts/compare-commits.sh
#	tests/test-chat.cpp
#	tests/test-sampling.cpp
llama-adapter.cpp llama : do not crash if there is no CPU backend (#13395) 2025-05-09 13:02:07 +02:00
llama-adapter.h llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-arch.cpp model : Granite MoE shared (#13269) 2025-05-13 15:12:01 +02:00
llama-arch.h model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466) 2025-04-28 22:52:15 +03:00
llama-batch.cpp kv-cache : simplify the interface (#13660) 2025-05-21 15:11:13 +03:00
llama-batch.h kv-cache : separate recurrent vs non-recurrent impl (#12799) 2025-05-02 17:48:36 +03:00
llama-chat.cpp llama : one-off chat template fix for Mistral-Small-2503 (#13398) 2025-05-09 11:17:51 +02:00
llama-chat.h llama : one-off chat template fix for Mistral-Small-2503 (#13398) 2025-05-09 11:17:51 +02:00
llama-context.cpp Merge commit '72b090da2c' into concedo_experimental 2025-05-28 00:28:41 +08:00
llama-context.h llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
llama-cparams.cpp kv-cache : rework kv_cell (#13706) 2025-05-25 16:34:36 +03:00
llama-cparams.h kv-cache : rework kv_cell (#13706) 2025-05-25 16:34:36 +03:00
llama-grammar.cpp server: streaming of tool calls and thoughts when --jinja is on (#12379) 2025-05-25 01:48:08 +01:00
llama-grammar.h tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) 2025-03-05 13:05:13 +00:00
llama-graph.cpp fix to allow all EOGs to trigger a stop, occam's glm4 fix, 2025-05-24 22:55:11 +08:00
llama-graph.h kv-cache : add SWA support (#13194) 2025-05-20 08:05:46 +03:00
llama-hparams.cpp hparams : initialize arrays (#13728) 2025-05-23 20:16:13 +03:00
llama-hparams.h hparams : initialize arrays (#13728) 2025-05-23 20:16:13 +03:00
llama-impl.cpp GGUF: C++ refactor, backend support, misc fixes (#11030) 2025-01-07 18:01:58 +01:00
llama-impl.h cleanup: fix compile warnings associated with gnu_printf (#11811) 2025-02-12 10:06:53 -04:00
llama-io.cpp llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-io.h llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-kv-cache.cpp Merge commit '72b090da2c' into concedo_experimental 2025-05-28 00:28:41 +08:00
llama-kv-cache.h kv-cells : track min/max used cells and per-sequence positions (#13808) 2025-05-27 13:49:41 +03:00
llama-kv-cells.h kv-cells : track min/max used cells and per-sequence positions (#13808) 2025-05-27 13:49:41 +03:00
llama-memory.cpp llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
llama-memory.h kv-cache : rework kv_cell (#13706) 2025-05-25 16:34:36 +03:00
llama-mmap.cpp Merge branch 'upstream' into concedo_experimental 2025-03-26 00:18:01 +08:00
llama-mmap.h llama-mmap: fix missing include (#11796) 2025-02-10 20:58:18 +02:00
llama-model-loader.cpp Merge branch 'upstream' into concedo_experimental 2025-05-16 15:30:31 +08:00
llama-model-loader.h llama : add option to override model tensor buffers (#11397) 2025-04-02 14:52:01 +02:00
llama-model-saver.cpp llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
llama-model-saver.h llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
llama-model.cpp Merge commit 'e121edc432' into concedo_experimental 2025-05-28 00:20:45 +08:00
llama-model.h kv-cache : add SWA support (#13194) 2025-05-20 08:05:46 +03:00
llama-quant.cpp Merge branch 'upstream' into concedo_experimental 2025-05-16 15:30:31 +08:00
llama-quant.h llama : refactor src/llama.cpp (#10902) 2025-01-03 10:18:53 +02:00
llama-sampling.cpp sampling : make sure samplers return at least 1 token (#13822) 2025-05-27 12:07:52 +03:00
llama-sampling.h llama : add llama_vocab, functions -> methods, naming (#11110) 2025-01-12 11:32:42 +02:00
llama-vocab.cpp removed unnecessary function 2025-05-24 23:59:31 +08:00
llama-vocab.h fix to allow all EOGs to trigger a stop, occam's glm4 fix, 2025-05-24 22:55:11 +08:00
llama.cpp Merge branch 'upstream' into concedo_experimental 2025-05-18 23:27:53 +08:00
unicode-data.cpp server : better security control for public deployments (#9776) 2024-10-08 13:27:04 +02:00
unicode-data.h llama : reduce compile time and binary size (#9712) 2024-10-02 15:49:55 +02:00
unicode.cpp Merge branch 'upstream' into concedo_experimental 2025-02-16 02:08:39 +08:00
unicode.h unicode : improve naming style (#10838) 2024-12-16 12:31:45 +02:00