koboldcpp/src

Latest commit: Concedo 1c41c38a6a (2025-08-20 20:34:45 +08:00)

    Merge branch 'upstream' into concedo_experimental

    # Conflicts:
    #	.devops/cuda.Dockerfile
    #	CODEOWNERS
    #	README.md
    #	ggml/src/ggml-cann/aclnn_ops.cpp
    #	ggml/src/ggml-cann/common.h
    #	ggml/src/ggml-opencl/ggml-opencl.cpp
    #	scripts/sync-ggml-am.sh
    #	scripts/sync-ggml.last
    #	scripts/sync-ggml.sh
    #	tests/test-chat.cpp
    #	tools/batched-bench/batched-bench.cpp
    #	tools/mtmd/clip.h
| File | Last commit | Date |
| --- | --- | --- |
| llama-adapter.cpp | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| llama-adapter.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-arch.cpp | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-arch.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-batch.cpp | hope i didnt break anything | 2025-08-14 21:42:24 +08:00 |
| llama-batch.h | llama : reuse compute graphs (#14482) | 2025-07-17 19:08:33 +03:00 |
| llama-chat.cpp | chat : fix yandex chat template (#15116) | 2025-08-06 13:26:49 +02:00 |
| llama-chat.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-context.cpp | Merge branch 'upstream' into concedo_experimental | 2025-08-20 20:34:45 +08:00 |
| llama-context.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : add high-throughput mode (#14363) | 2025-07-16 16:35:42 +03:00 |
| llama-grammar.cpp | Add memoized cache to llama_grammar_reject_candidates_for_stack (#1615) | 2025-06-25 19:22:19 +08:00 |
| llama-grammar.h | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama-graph.cpp | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-graph.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-hparams.cpp | model : add support for SmallThinker series (#14898) | 2025-07-28 13:47:00 +02:00 |
| llama-hparams.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| llama-impl.cpp | extend log | 2025-06-26 18:52:44 +08:00 |
| llama-impl.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-unified-iswa.cpp | hope i didnt break anything | 2025-08-14 21:42:24 +08:00 |
| llama-kv-cache-unified-iswa.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-kv-cache-unified.cpp | hope i didnt break anything | 2025-08-14 21:42:24 +08:00 |
| llama-kv-cache-unified.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-kv-cells.h | kv-cache : use ggml_set_rows (#14285) | 2025-07-03 10:53:35 +03:00 |
| llama-memory-hybrid.cpp | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-memory-hybrid.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-memory-recurrent.cpp | hope i didnt break anything | 2025-08-14 21:42:24 +08:00 |
| llama-memory-recurrent.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | server : add SWA checkpoints (#15293) | 2025-08-14 14:59:50 +03:00 |
| llama-mmap.cpp | readjusted mistral and oai template, fixed compile issue on termux, updated lite, show generated token ids in debug mode | 2025-08-07 21:14:48 +08:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | Merge branch 'upstream' into concedo_experimental | 2025-08-06 10:51:29 +08:00 |
| llama-model-loader.h | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | Merge branch 'upstream' into concedo_experimental | 2025-08-20 20:34:45 +08:00 |
| llama-model.h | model : add gpt-oss type strings (#15424) | 2025-08-19 19:58:28 +03:00 |
| llama-quant.cpp | should fix vulkan bsod | 2025-08-08 10:57:50 +08:00 |
| llama-quant.h | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | sampling : make sure samplers return at least 1 token (#13822) | 2025-05-27 12:07:52 +03:00 |
| llama-sampling.h | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-vocab.cpp | Merge branch 'upstream' into concedo_experimental | 2025-08-16 12:39:25 +08:00 |
| llama-vocab.h | Merge branch 'upstream' into concedo_experimental | 2025-08-02 10:25:10 +08:00 |
| llama.cpp | Merge commit '456af35eb7' into concedo_experimental | 2025-06-20 23:41:27 +08:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | Merge branch 'upstream' into concedo_experimental | 2025-07-16 12:03:54 +08:00 |
| unicode.h | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |
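The listing above is the `src` tree of koboldcpp's bundled llama.cpp: model loading (llama-model-loader.cpp, llama-model.cpp), tokenization (llama-vocab.cpp, unicode.cpp), context and batch handling (llama-context.cpp, llama-batch.cpp), the KV-cache and memory variants (llama-kv-cache-unified*.cpp, llama-memory-*.cpp), and sampling (llama-sampling.cpp). A minimal sketch of how these modules are exercised through the public `llama.h` C API follows; the model path `model.gguf`, the prompt, buffer sizes, and the 16-token cap are placeholder assumptions for illustration, not values taken from this repo.

```cpp
// Minimal sketch (not part of this tree) driving the modules listed above
// through the public llama.h API.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main() {
    llama_backend_init();

    // load the GGUF model (llama-model-loader.cpp, llama-model.cpp)
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (!model) { fprintf(stderr, "failed to load model\n"); return 1; }
    const llama_vocab * vocab = llama_model_get_vocab(model);

    // create a context (llama-context.cpp, llama-cparams.cpp)
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048; // placeholder context size
    llama_context * ctx = llama_init_from_model(model, cparams);

    // tokenize the prompt (llama-vocab.cpp, unicode.cpp)
    const std::string prompt = "Hello";
    std::vector<llama_token> tokens(prompt.size() + 8);
    const int n = llama_tokenize(vocab, prompt.c_str(), (int32_t) prompt.size(),
                                 tokens.data(), (int32_t) tokens.size(),
                                 /*add_special=*/true, /*parse_special=*/false);
    tokens.resize(n);

    // greedy sampler chain (llama-sampling.cpp)
    llama_sampler * smpl = llama_sampler_chain_init(llama_sampler_chain_default_params());
    llama_sampler_chain_add(smpl, llama_sampler_init_greedy());

    // decode the prompt, then generate up to 16 tokens
    // (llama-batch.cpp, llama-graph.cpp, llama-kv-cache-unified.cpp)
    llama_token tok = 0; // must outlive the batch that points at it
    llama_batch batch = llama_batch_get_one(tokens.data(), (int32_t) tokens.size());
    for (int i = 0; i < 16 && llama_decode(ctx, batch) == 0; i++) {
        tok = llama_sampler_sample(smpl, ctx, -1);
        if (llama_vocab_is_eog(vocab, tok)) break;
        char buf[128];
        const int len = llama_token_to_piece(vocab, tok, buf, sizeof(buf), 0, true);
        if (len > 0) fwrite(buf, 1, len, stdout);
        batch = llama_batch_get_one(&tok, 1);
    }

    llama_sampler_free(smpl);
    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```

Build it against this tree's `llama.h` and link the compiled library; the exact compiler and linker flags depend on which backend (CPU, CUDA, Vulkan, etc.) the repo was built with.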