koboldcpp/include
Concedo 754fef5204 Merge branch 'upstream' into concedo_experimental (2025-02-15 00:49:46 +08:00)
# Conflicts:
#	.devops/cuda.Dockerfile
#	.devops/musa.Dockerfile
#	.github/workflows/build.yml
#	README.md
#	docs/docker.md
#	examples/imatrix/imatrix.cpp
#	examples/llama-bench/llama-bench.cpp
#	examples/main/README.md
#	examples/perplexity/perplexity.cpp
#	examples/server/README.md
#	ggml/src/ggml-cpu/ggml-cpu.c
#	ggml/src/ggml-cuda/CMakeLists.txt
#	models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
#	models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
#	scripts/get_chat_template.py
#	scripts/sync-ggml.last
#	tests/test-chat.cpp
#	tests/test-gguf.cpp
#	tests/test-sampling.cpp
Name                 Last commit message                                                           Date
CL/                  wip dont use                                                                  2023-04-21 00:35:54 +08:00
vulkan/              merge checkpoint 2 - functional merge without q4_0_4_4 (need regen shaders)   2024-12-13 17:04:19 +08:00
cblas.h              wip dont use                                                                  2023-04-21 00:35:54 +08:00
clblast.h            Revert "clblast up ver"                                                       2024-02-21 14:35:38 +08:00
clblast_c.h          Revert "clblast up ver"                                                       2024-02-21 14:35:38 +08:00
clblast_half.h       upgraded clblast                                                              2023-05-25 10:18:12 +08:00
clblast_netlib_c.h   Not working, don't use. testing a merge                                       2023-05-16 12:33:24 +08:00
llama-cpp.h          llama : add llama_vocab, functions -> methods, naming (#11110)                2025-01-12 11:32:42 +02:00
llama.h              Merge branch 'upstream' into concedo_experimental                             2025-02-15 00:49:46 +08:00
openblas_config.h    wip dont use                                                                  2023-04-21 00:35:54 +08:00