koboldcpp/ggml/src (latest commit: 2026-03-21 17:34:12 +08:00)
Name                 | Last commit                                                                   | Date
---------------------|-------------------------------------------------------------------------------|---------------------------
ggml-blas/           | Merge branch 'upstream' into concedo_experimental                             | 2026-03-19 02:23:06 +08:00
ggml-cpu/            | Merge branch 'upstream' into concedo_experimental                             | 2026-03-21 12:06:01 +08:00
ggml-cuda/           | Merge branch 'upstream' into concedo_experimental                             | 2026-03-19 02:23:06 +08:00
ggml-metal/          | Merge branch 'upstream' into concedo_experimental                             | 2026-03-15 15:20:38 +08:00
ggml-vulkan/         | added --sdmaingpu allowing image models to be independently placed on any gpu | 2026-03-21 17:34:12 +08:00
ggml-alloc.c         | ggml : make ggml_is_view as API (#19539)                                      | 2026-02-16 17:43:34 +02:00
ggml-backend-dl.cpp  | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150)       | 2026-01-29 12:33:21 -08:00
ggml-backend-dl.h    | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150)       | 2026-01-29 12:33:21 -08:00
ggml-backend-impl.h  | llama: use host memory if device reports 0 memory (#18587)                    | 2026-01-09 05:34:56 +08:00
ggml-backend-reg.cpp | note: smartcache is broken for rnn currently                                  | 2026-03-15 11:31:47 +08:00
ggml-backend.cpp     | double n_batch size when pipeline parallel is enabled, keep u_batch the same  | 2026-03-21 11:22:10 +08:00
ggml-common.h        | ggml : add NVFP4 quantization type support (#19769)                           | 2026-03-11 21:02:54 +01:00
ggml-impl.h          | ggml : add NVFP4 quantization type support (#19769)                           | 2026-03-11 21:02:54 +01:00
ggml-opt.cpp         |                                                                               |
ggml-quants.c        | ggml : guard against sumq2 being 0 in IQ4_NL (#20460)                         | 2026-03-15 10:47:28 +02:00
ggml-quants.h        | ggml : add NVFP4 quantization type support (#19769)                           | 2026-03-11 21:02:54 +01:00
ggml-threading.cpp   |                                                                               |
ggml-threading.h     |                                                                               |
ggml.c               | Merge branch 'upstream' into concedo_experimental                             | 2026-03-19 02:23:06 +08:00
ggml.cpp             |                                                                               |
gguf.cpp             | Merge commit '1ca3d1de15' into concedo_experimental                           | 2026-02-26 19:55:06 +08:00