koboldcpp/ggml/src
Oliver Simons 1f1e57f2bf
CUDA: Fix loop unrolling for BW in mul_mat_q_stream_k_fixup (#19053)
By providing the stride_* variables as size_t (i.e., 64-bit), the compiler can
correctly unroll the [two for-loops](557515be1e/ggml/src/ggml-cuda/mmq.cuh (L3789-L3816))
on BW. This improves performance in the prefill (pp) phase on Blackwell (BW) while
leaving other SM architectures unaffected:

| GPU                                                     | Model                 | Test   |   t/s master |   t/s osimons/fix_bw_mmq_fixup_kernel |   Speedup |
|:--------------------------------------------------------|:----------------------|:-------|-------------:|--------------------------------------:|----------:|
| NVIDIA RTX 6000 Ada Generation                          | gpt-oss 20B MXFP4 MoE | pp8096 |      8404.05 |                               8375.79 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | llama 3B Q4_K_M       | pp8096 |     16148.93 |                              16019.60 |      0.99 |
| NVIDIA RTX 6000 Ada Generation                          | llama 8B Q4_0         | pp8096 |      8008.29 |                               7978.80 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B BF16    | pp8096 |      4263.16 |                               4248.53 |      1.00 |
| NVIDIA RTX 6000 Ada Generation                          | nemotron_h 9B Q4_K_M  | pp8096 |      5165.11 |                               5157.43 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | gpt-oss 20B MXFP4 MoE | pp8096 |     12582.80 |                              12758.37 |      1.01 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 3B Q4_K_M       | pp8096 |     16879.10 |                              17619.47 |      1.04 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | llama 8B Q4_0         | pp8096 |     10649.90 |                              10982.65 |      1.03 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B BF16    | pp8096 |      7717.73 |                               7716.22 |      1.00 |
| NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition | nemotron_h 9B Q4_K_M  | pp8096 |      7301.90 |                               7370.38 |      1.01 |
2026-02-03 11:33:14 +01:00
ggml-blas ggml : add ggml_build_forward_select (#18550) 2026-01-19 20:03:19 +02:00
ggml-cann docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
ggml-cpu ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
ggml-cuda CUDA: Fix loop unrolling for BW in mul_mat_q_stream_k_fixup (#19053) 2026-02-03 11:33:14 +01:00
ggml-hexagon ggml-hexagon: flash-attention and reduce-sum optimizations (#19141) 2026-01-30 21:14:20 -08:00
ggml-hip HIP: add mmf for CDNA (#18896) 2026-01-29 11:10:53 +01:00
ggml-metal metal : support virtual devices (#18919) 2026-02-02 14:29:44 +02:00
ggml-musa
ggml-opencl opencl: refactor some ops, concat, repeat, tanh and scale (#19226) 2026-02-02 15:54:43 -08:00
ggml-rpc rpc : use unordered_map::reserve and emplace (#18513) 2026-01-02 12:09:36 +02:00
ggml-sycl Remove support for Nvidia & AMD GPUs, because the oneAPI plugin for Nvidia & AMD GPUs is unavailable: its download/installation channels no longer work. (#19246) 2026-02-02 21:06:21 +08:00
ggml-virtgpu ggml: new backend for Virglrenderer API Remoting acceleration (v2) (#18718) 2026-01-28 17:49:40 +08:00
ggml-vulkan docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
ggml-webgpu Remove pipeline cache mutexes (#19195) 2026-02-01 18:47:29 -08:00
ggml-zdnn ggml-zdnn : mark zDNN buffers as non-host (#18967) 2026-01-22 01:16:21 +01:00
ggml-zendnn ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (#19159) 2026-01-29 12:28:57 +08:00
CMakeLists.txt hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-alloc.c llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) 2025-12-15 09:24:59 +01:00
ggml-backend-dl.cpp hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-dl.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-impl.h llama: use host memory if device reports 0 memory (#18587) 2026-01-09 05:34:56 +08:00
ggml-backend-reg.cpp hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend.cpp ggml-backend: fix async set/get fallback sync (#19179) 2026-02-02 10:00:05 +01:00
ggml-common.h
ggml-impl.h ggml : add ggml_build_forward_select (#18550) 2026-01-19 20:03:19 +02:00
ggml-opt.cpp
ggml-quants.c
ggml-quants.h
ggml-threading.cpp
ggml-threading.h
ggml.c ggml: added cleanups in ggml_quantize_free (#19278) 2026-02-03 08:43:39 +02:00
ggml.cpp
gguf.cpp GGUF: check that tensor size is representable (#19072) 2026-01-24 21:57:51 +01:00