koboldcpp/include
Latest commit: 5de51b77c1 by Concedo — "Merge branch 'upstream' into concedo_experimental" (2025-09-11 22:28:19 +08:00)

Conflicts:
- .github/workflows/close-issue.yml
- docs/build-s390x.md
- examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
- ggml/CMakeLists.txt
- ggml/src/ggml-cann/ggml-cann.cpp
- ggml/src/ggml-cpu/CMakeLists.txt
- ggml/src/ggml-cpu/kleidiai/kleidiai.cpp
- ggml/src/ggml-cuda/fattn-tile-f16.cu
- ggml/src/ggml-cuda/fattn.cu
- ggml/src/ggml-webgpu/ggml-webgpu.cpp
- scripts/tool_bench.py
- tests/test-backend-ops.cpp
- tools/batched-bench/batched-bench.cpp
- tools/server/README.md
| Name | Last commit | Date |
| --- | --- | --- |
| CL | wip dont use | 2023-04-21 00:35:54 +08:00 |
| vulkan | updated vulkan to make use of cm2 | 2025-04-18 22:10:57 +08:00 |
| cblas.h | wip dont use | 2023-04-21 00:35:54 +08:00 |
| clblast.h | Revert "clblast up ver" | 2024-02-21 14:35:38 +08:00 |
| clblast_c.h | Revert "clblast up ver" | 2024-02-21 14:35:38 +08:00 |
| clblast_half.h | upgraded clblast | 2023-05-25 10:18:12 +08:00 |
| clblast_netlib_c.h | Not working, don't use. testing a merge | 2023-05-16 12:33:24 +08:00 |
| llama-cpp.h | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama.h | Merge branch 'upstream' into concedo_experimental | 2025-09-11 22:28:19 +08:00 |
| openblas_config.h | wip dont use | 2023-04-21 00:35:54 +08:00 |