koboldcpp/ggml/include
Latest commit: 590553ef07 by Concedo, 2024-11-16 17:20:14 +08:00
Merge branch 'upstream' into concedo_experimental

# Conflicts:
#	.devops/llama-cli-intel.Dockerfile
#	.devops/llama-server-intel.Dockerfile
#	.github/workflows/build.yml
#	CMakePresets.json
#	Makefile
#	docs/backend/SYCL.md
#	docs/build.md
#	ggml/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	scripts/compare-llama-bench.py
#	scripts/sync-ggml-am.sh
#	scripts/sync-ggml.last
File            Last commit                                                                    Date
ggml-alloc.h    ggml : fix typo in example usage ggml_gallocr_new (ggml/984)                   2024-10-04 18:50:05 +03:00
ggml-amx.h      ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-backend.h  ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-blas.h     ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-cann.h     ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-cpp.h      llama : use smart pointers for ggml resources (#10117)                         2024-11-01 23:48:26 +01:00
ggml-cpu.h      backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels (#9921)        2024-11-15 01:28:50 +01:00
ggml-cuda.h     attempts a backflip, but does he stick the landing?                            2024-11-16 17:05:45 +08:00
ggml-kompute.h  ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-metal.h    attempts a backflip, but does he stick the landing?                            2024-11-16 17:05:45 +08:00
ggml-rpc.h      ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-sycl.h     ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml-vulkan.h   ggml : build backends as libraries (#10256)                                    2024-11-14 18:04:35 +01:00
ggml.h          attempts a backflip, but does he stick the landing?                            2024-11-16 17:05:45 +08:00
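The ggml-alloc.h entry above points at the usage example for ggml_gallocr_new that lives in that header (the listed commit only fixes a typo in it). As a rough illustration of that graph-allocation flow, here is a minimal, unofficial sketch assuming the CPU backend; the build_graph() helper and the tiny add-graph it builds are hypothetical stand-ins, not something taken from the listed headers.

```c
// Minimal sketch of the ggml_gallocr_new flow referenced by the ggml-alloc.h entry.
// Assumes the CPU backend; build_graph() is a hypothetical demo helper.
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include "ggml-cpu.h"

static struct ggml_cgraph * build_graph(struct ggml_context * ctx) {
    // hypothetical: build a trivial compute graph (c = a + b)
    struct ggml_tensor * a  = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 16);
    struct ggml_tensor * b  = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 16);
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, ggml_add(ctx, a, b));
    return gf;
}

int main(void) {
    ggml_backend_t backend = ggml_backend_cpu_init();

    // context holds only tensor/graph metadata; tensor data comes from the gallocr
    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead() * 64 + ggml_graph_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true,
    };
    struct ggml_context * ctx = ggml_init(params);

    // create a graph allocator for the backend's default buffer type
    ggml_gallocr_t galloc = ggml_gallocr_new(ggml_backend_get_default_buffer_type(backend));

    // allocate the graph's tensors in a backend buffer, then run it
    struct ggml_cgraph * graph = build_graph(ctx);
    ggml_gallocr_alloc_graph(galloc, graph);
    ggml_backend_graph_compute(backend, graph);

    ggml_gallocr_free(galloc);
    ggml_free(ctx);
    ggml_backend_free(backend);
    return 0;
}
```

Note the no_alloc = true context: it only stores tensor metadata, and ggml_gallocr_alloc_graph() then places the actual tensor data in a backend buffer sized for the graph.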