Directory listing: `koboldcpp/ggml/src` (last updated 2026-04-03 11:07:46 +08:00)
| Name | Last commit message | Last commit date |
|---|---|---|
| ggml-blas | Merge branch 'upstream' into concedo_experimental | 2026-03-19 02:23:06 +08:00 |
| ggml-cpu | Merge commit 'fbd441c379' into concedo_experimental | 2026-04-03 01:06:02 +08:00 |
| ggml-cuda | Merge commit 'fbd441c379' into concedo_experimental | 2026-04-03 01:06:02 +08:00 |
| ggml-metal | Merge branch 'upstream' into concedo_experimental | 2026-03-30 20:45:38 +08:00 |
| ggml-vulkan | Merge branch 'upstream' into concedo_experimental | 2026-03-30 20:45:38 +08:00 |
| ggml-webgpu/wgsl-shaders | merged support for gemma4. the e2b, e4b and 26b work, the 31b does not | 2026-04-03 11:07:46 +08:00 |
| ggml-alloc.c | ggml : make ggml_is_view as API (#19539) | 2026-02-16 17:43:34 +02:00 |
| ggml-backend-dl.cpp | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-dl.h | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-impl.h | llama: use host memory if device reports 0 memory (#18587) | 2026-01-09 05:34:56 +08:00 |
| ggml-backend-reg.cpp | note: smartcache is broken for rnn currently | 2026-03-15 11:31:47 +08:00 |
| ggml-backend.cpp | Revert "Revert "llama : disable graph reuse with pipeline parallelism (#20463)"" | 2026-03-25 22:25:20 +08:00 |
| ggml-common.h | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00 |
| ggml-impl.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml : guard against sumq2 being 0 in IQ4_NL (#20460) | 2026-03-15 10:47:28 +02:00 |
| ggml-quants.h | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00 |
| ggml-threading.cpp | | |
| ggml-threading.h | | |
| ggml.c | Merge commit '0fac87b157' into concedo_experimental | 2026-03-29 01:14:33 +08:00 |
| ggml.cpp | | |
| gguf.cpp | Merge branch 'upstream' into concedo_experimental | 2026-03-28 01:18:20 +08:00 |