koboldcpp/include
Latest commit f7923b261f (Concedo): need to fix cuda compile. Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/python-type-check.yml
#	examples/speculative-simple/README.md
#	examples/speculative-simple/speculative-simple.cpp
#	ggml/src/ggml-cuda/im2col.cu
#	ggml/src/ggml-opencl/CMakeLists.txt
#	ggml/src/ggml-opencl/ggml-opencl.cpp
#	ggml/src/ggml-opencl/kernels/cvt.cl
#	tests/test-backend-ops.cpp
#	tools/cli/README.md
#	tools/mtmd/CMakeLists.txt
#	tools/server/README.md
Committed: 2026-05-12 20:47:07 +08:00
Name         Last commit                                                                    Last updated
vulkan/      updated vulkan to make use of cm2                                              2025-04-18 22:10:57 +08:00
llama-cpp.h  llama : re-enable manual LoRA adapter free (#19983)                            2026-03-18 12:03:26 +02:00
llama.h      need to fix cuda compile. Merge branch 'upstream' into concedo_experimental    2026-05-12 20:47:07 +08:00
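
For context on what these headers provide: llama.h is the C API, and llama-cpp.h layers std::unique_ptr deleters over it so model and context handles free themselves. Below is a minimal sketch of the manual LoRA adapter freeing that the #19983 commit message refers to, assuming the upstream llama.cpp API names at a comparable revision; this fork may differ, and model.gguf / adapter.gguf are placeholder paths.

    #include "llama.h"
    #include "llama-cpp.h"   // std::unique_ptr wrappers such as llama_model_ptr

    int main() {
        llama_backend_init();
        {
            llama_model_params mparams = llama_model_default_params();
            // llama_model_ptr's deleter calls llama_model_free automatically.
            llama_model_ptr model(llama_model_load_from_file("model.gguf", mparams));
            if (!model) {
                llama_backend_free();
                return 1;
            }

            // Manual management: the adapter is freed explicitly, before the
            // model, instead of waiting for llama_model_free to clean it up.
            llama_adapter_lora * adapter = llama_adapter_lora_init(model.get(), "adapter.gguf");
            if (adapter) {
                llama_adapter_lora_free(adapter);
            }
        } // model freed here by the smart pointer
        llama_backend_free();
        return 0;
    }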