Jeff Bolz
0090950f67
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory ( #12833 )
...
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information was repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.
This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.
The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
2025-04-09 07:25:08 +02:00
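For readers unfamiliar with the pattern, here is a minimal CUDA sketch of the restructuring the commit above describes. The actual change lives in the coopmat2 GLSL shader (mul_mm_cm2.comp); every name, constant, and layout below is illustrative, and it assumes a workgroup of at least 48 threads and nblocks a multiple of BLOCKS_PER_ITER.
```
#include <cstdint>

constexpr int BLOCKS_PER_ITER = 4;   // whole blocks handled per outer iteration
constexpr int QK = 256;              // elements per k-quant superblock

struct BlockQ4K {                    // stand-in for a q4_k superblock
    uint8_t scales[12];              // packed scale/min bytes that were re-read every iteration
    uint8_t qs[QK / 2];              // 4-bit quants
};

__global__ void mmq_sketch(const BlockQ4K* a, float* out, int nblocks) {
    __shared__ uint8_t s_scales[BLOCKS_PER_ITER][12];
    float acc = 0.0f;

    for (int blk = 0; blk < nblocks; blk += BLOCKS_PER_ITER) {  // outer loop over whole blocks
        // Copy/decode the scale data into shared memory once per outer
        // iteration; previously the same bytes were loaded and decoded
        // from global memory in every inner step.
        if (threadIdx.x < BLOCKS_PER_ITER * 12) {
            int i = threadIdx.x / 12, j = threadIdx.x % 12;
            s_scales[i][j] = a[blk + i].scales[j];
        }
        __syncthreads();

        #pragma unroll
        for (int i = 0; i < BLOCKS_PER_ITER; ++i) {             // unrolled inner loop
            int q = a[blk + i].qs[threadIdx.x % (QK / 2)] & 0xF;
            acc += (float)(q * s_scales[i][threadIdx.x % 12]);  // scales now come from shared memory
        }
        __syncthreads();  // don't overwrite s_scales while lanes still read it
    }
    out[blockIdx.x * blockDim.x + threadIdx.x] = acc;
}
```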
Concedo
b99ee451f8
Merge commit '4ccea213bc' into concedo_experimental
...
# Conflicts:
# .devops/cpu.Dockerfile
# .devops/cuda.Dockerfile
# .devops/intel.Dockerfile
# .devops/musa.Dockerfile
# .devops/rocm.Dockerfile
# .github/workflows/bench.yml.disabled
# .github/workflows/build.yml
# .github/workflows/server.yml
# CMakeLists.txt
# build-xcframework.sh
# ci/run.sh
# common/CMakeLists.txt
# examples/llama.android/llama/build.gradle.kts
# examples/perplexity/perplexity.cpp
# examples/run/CMakeLists.txt
# examples/server/tests/README.md
# examples/sycl/win-build-sycl.bat
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-cpu/ggml-cpu.c
# licenses/LICENSE-linenoise
# scripts/sync-ggml.last
# tests/CMakeLists.txt
2025-04-08 21:26:23 +08:00
Jeff Bolz
80b717d493
vulkan: Use unclamped loads for flash attention mask ( #12720 )
...
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.
2025-04-06 10:47:13 +02:00
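In other words, the padding guarantees mean a whole tile of mask rows can be read without per-element clamping. A hedged sketch of the two load paths (names are illustrative; the real shader uses coopmat2 tensor loads):
```
#include <math.h>

// generic path: every lane pays for two comparisons and a select
__device__ float load_mask_clamped(const float* mask, int row, int col, int nem1, int kv) {
    return (row < nem1 && col < kv) ? mask[row * kv + col] : -INFINITY;
}

// padded path: since nem1 is a multiple of GGML_KQ_MASK_PAD (itself a
// multiple of the tile's row count) and the KV dim is a multiple of the
// column count, tiles never straddle the edge, so the check holds by construction
__device__ float load_mask_unclamped(const float* mask, int row, int col, int kv) {
    return mask[row * kv + col];
}
```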
0cc4m
6bf28f0111
Vulkan: Tune Vulkan mmq int dot shader for performance ( #12767 )
2025-04-05 18:04:03 +02:00
Concedo
c92b0cf1a4
early merge https://github.com/ggml-org/llama.cpp/pull/12767 to test
2025-04-05 18:48:57 +08:00
Concedo
4e740311fe
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# ci/run.sh
# docs/backend/SYCL.md
# docs/build.md
# ggml/src/ggml-vulkan/CMakeLists.txt
# ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
# tests/test-chat-template.cpp
2025-04-04 15:07:47 +08:00
Jeff Bolz
74d4f5b041
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency ( #12630 )
...
There seems to be a bubble waking up from waitForFences, which costs a few
percent in performance and also increases variance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
2025-04-04 07:54:35 +02:00
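A host-side sketch of that hybrid wait using the plain Vulkan C API (the fence handles and the ~80% placement of almost_ready are managed elsewhere; this shows only the waiting strategy):
```
#include <vulkan/vulkan.h>
#include <immintrin.h>  // _mm_pause

void hybrid_fence_wait(VkDevice device, VkFence almost_ready, VkFence final_fence) {
    // Sleep in the driver until the graph is roughly 80% complete...
    vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);
    // ...then spin on the cheap status query so the final fence is observed
    // immediately, avoiding the wake-up bubble of a second waitForFences.
    while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
        _mm_pause();
    }
}
```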
Concedo
8c74520586
added NO_VULKAN_EXTENSIONS flag to disable dp4a and coopmat if needed
2025-04-03 20:51:17 +08:00
Concedo
103d60ed2c
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# common/common.cpp
# examples/batched-bench/batched-bench.cpp
# examples/batched/batched.cpp
# examples/export-lora/export-lora.cpp
# examples/gritlm/gritlm.cpp
# examples/parallel/parallel.cpp
# examples/passkey/passkey.cpp
# examples/speculative-simple/speculative-simple.cpp
# examples/speculative/speculative.cpp
# ggml/src/ggml-cann/CMakeLists.txt
# ggml/src/ggml-cann/acl_tensor.cpp
# ggml/src/ggml-cann/acl_tensor.h
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-vulkan/CMakeLists.txt
# tests/test-arg-parser.cpp
# tests/test-backend-ops.cpp
2025-04-03 18:57:49 +08:00
Jeff Bolz
f01bd02376
vulkan: Implement split_k for coopmat2 flash attention. ( #12627 )
...
When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
2025-04-02 14:25:08 -05:00
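Conceptually, each of the split_k workgroups processes one slice of the KV range and writes a partial result, and a small reduction pass merges them. A sketch of that merge, assuming each partial carries its normalized output row plus the softmax running max m and exp-sum l (the layout here is illustrative, not the shader's):
```
#include <algorithm>
#include <cmath>
#include <vector>

struct Partial { std::vector<float> o; float m; float l; };  // output row, running max, exp-sum

std::vector<float> combine_splits(const std::vector<Partial>& parts) {
    float m = -INFINITY;
    for (const auto& p : parts) m = std::max(m, p.m);         // global max over all KV slices

    size_t d = parts[0].o.size();
    std::vector<float> o(d, 0.0f);
    float l = 0.0f;
    for (const auto& p : parts) {
        float w = p.l * std::exp(p.m - m);                    // rescale this slice's exp-sum
        for (size_t i = 0; i < d; ++i) o[i] += w * p.o[i];    // p.o is assumed normalized by p.l
        l += w;
    }
    for (size_t i = 0; i < d; ++i) o[i] /= l;                 // final softmax normalization
    return o;
}
```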
Jeff Bolz
be0a0f8cae
vulkan: Implement grouped query attention in the coopmat2 FA shader ( #12559 )
...
When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:
dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))
previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.
This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
2025-04-02 19:40:32 +02:00
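The workgroup mapping for the example above, sketched in CUDA terms with illustrative names: 32 Q heads sharing 8 KV heads gives a group ratio of 4, so 8 workgroups each cover 4 Q heads while reusing the same K/V tiles.
```
__global__ void fa_gqa_mapping(int n_q_heads, int n_kv_heads) {
    int group_ratio = n_q_heads / n_kv_heads;    // 32 / 8 = 4 in the example
    int kv_head = blockIdx.x;                    // launch 8 workgroups instead of 32
    for (int g = 0; g < group_ratio; ++g) {
        int q_head = kv_head * group_ratio + g;  // the adjacent Q batches sharing this K/V
        // ... K/V tiles loaded once by this workgroup are reused for every
        // q_head in the group, producing group_ratio results per workgroup
        (void)q_head;
    }
}
```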
Concedo
8a4a9b8c19
Merge branch 'upstream' into concedo_experimental
2025-04-01 20:16:16 +08:00
Concedo
9e182b3e78
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# README.md
# docs/backend/SYCL.md
# ggml/src/ggml-sycl/CMakeLists.txt
# ggml/src/ggml-vulkan/CMakeLists.txt
# ggml/src/ggml-vulkan/ggml-vulkan.cpp
# scripts/sync-ggml.last
# tests/test-chat-template.cpp
2025-04-01 20:16:07 +08:00
Wagner Bruna
2bb3597e42
vulkan: fix build when glslc doesn't support coopmat ( #12683 )
2025-04-01 11:38:07 +02:00
0cc4m
a8a1f33567
Vulkan: Add DP4A MMQ and Q8_1 quantization shader ( #12135 )
...
* Vulkan: Add DP4A MMQ and Q8_1 quantization shader
* Add q4_0 x q8_1 matrix matrix multiplication support
* Vulkan: Add int8 coopmat MMQ support
* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code
* Add GL_EXT_integer_dot_product check
* Remove ggml changes, fix mmq pipeline picker
* Remove ggml changes, restore Intel coopmat behaviour
* Fix glsl compile attempt when integer vec dot is not supported
* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq
* Remove redundant comment
* Fix integer dot check
* Fix compile issue with unsupported int dot glslc
* Update Windows build Vulkan SDK version
2025-03-31 14:37:01 +02:00
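The building block behind this list is the 4-way int8 dot product. A minimal sketch using CUDA's __dp4a (the Vulkan shaders use the GL_EXT_integer_dot_product equivalent; block layout and names are illustrative, and 4-byte alignment of the quant data is assumed):
```
#include <cstdint>

// dot product of two 32-element int8 blocks, each with a float scale,
// e.g. unpacked q4_0 weights against the q8_1 data the new shader produces
__device__ float q8_block_dot(const int8_t* aq, float ad,
                              const int8_t* bq, float bd) {
    const int* a4 = reinterpret_cast<const int*>(aq);
    const int* b4 = reinterpret_cast<const int*>(bq);
    int sumi = 0;
    #pragma unroll
    for (int i = 0; i < 32 / 4; ++i)        // 32 int8 values, 4 per instruction
        sumi = __dp4a(a4[i], b4[i], sumi);  // sumi += dot(int8x4, int8x4)
    return ad * bd * (float)sumi;           // apply both block scales at the end
}
```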
Concedo
396875e1c4
update api docs and lite
2025-03-29 15:39:25 +08:00
Georgi Gerganov
b4ae50810e
metal : improve FA + improve MoE ( #12612 )
...
* ggml : FA with different K, V head sizes (CPU)
ggml-ci
* metal : add FA with HS=192
* metal : extend FA to support different K and V head sizes
ggml-ci
* metal : add FA vector kernels for heads K 192 and V 128
ggml-ci
* ggml : restrict op on other backends to equal head sizes
ggml-ci
* metal : optimize FA-vec kernel
ggml-ci
* metal : FA remove mq registers
* metal : improve MoE mul_mat_id condition
ggml-ci
* metal : fix comments + remove unnecessary addition
ggml-ci
* metal : avoid too much shared memory usage with mul_mat_id
ggml-ci
2025-03-28 20:21:59 +02:00
Concedo
ea358369cc
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# ci/README.md
# ci/run.sh
# docs/backend/CUDA-FEDORA.md
# docs/build.md
# docs/install.md
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-cuda/common.cuh
# tests/test-backend-ops.cpp
2025-03-26 00:18:01 +08:00
Jeff Bolz
eddfb43850
vulkan: Optimize mul_mat_vec p021 and nc shaders ( #12505 )
...
* tests: add mul_mat perf/functional tests for p021/nc vulkan shaders
* vulkan: Optimize mul_mat_vec p021 and nc shaders.
These shaders are used in attention calculations, and when the KV cache grows
large they start to dominate the run time. For the nc shader (which is called
with large 'k' dimension), use unrolling and vector loads. For the p021 shader
(which is called with large 'm' and small 'k' dimensions), take advantage of
grouped query attention to reuse loads from the A matrix for the whole group,
and reduce the number of workgroups (too much overhead from tiny dispatches).
Using subgroupAdd in the p021 shader also helps, so it is used conditionally.
2025-03-22 09:40:11 +01:00
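The two techniques named for these shaders translate roughly to the CUDA sketch below: vector loads with an unrolled loop for the large-k case, and a subgroup reduction (subgroupAdd, rendered here as warp shuffles) instead of a shared-memory reduction. It assumes a 32-thread workgroup computing one output row; names are illustrative.
```
__global__ void matvec_sketch(const float4* a_row, const float4* x, float* out, int k4) {
    float sum = 0.0f;
    #pragma unroll 4
    for (int i = threadIdx.x; i < k4; i += blockDim.x) {
        float4 va = a_row[i], vx = x[i];  // one 16B vector load instead of four scalar loads
        sum += va.x * vx.x + va.y * vx.y + va.z * vx.z + va.w * vx.w;
    }
    for (int off = 16; off > 0; off >>= 1)             // warp reduction ~ subgroupAdd
        sum += __shfl_down_sync(0xffffffffu, sum, off);
    if (threadIdx.x == 0) out[blockIdx.x] = sum;       // one result per workgroup
}
```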
Concedo
27bd7b95f5
Merge branch 'upstream' into concedo_experimental
2025-03-22 09:42:33 +08:00
stduhpf
4375415b4a
Vulkan: RTE rounding for cpy to quant ( #12480 )
...
* Vulkan: RTE rounding for cpy to quant
Co-Authored-By: Jeff Bolz <jbolz@nvidia.com>
* remove trailing whitespace
* avoid duplicating pipeline_cpy_f32_quant
* fix copypasting issue
* remove duplicated code
---------
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-03-21 20:34:50 +01:00
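For context, RTE is round-to-nearest-even. Copying f32 data into a quantized type divides by the block scale and must round; a sketch of the difference (the real shaders request RTE via SPIR-V execution modes rather than a library call, and production code also clamps to the int8 range):
```
#include <cmath>
#include <cstdint>

int8_t quantize_trunc(float x, float d) {
    return (int8_t)(x / d);            // C-style cast truncates toward zero (biased)
}

int8_t quantize_rte(float x, float d) {
    return (int8_t)nearbyintf(x / d);  // default FP environment rounds to nearest-even
}
```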
Concedo
0c90d2ebcf
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# CMakeLists.txt
# cmake/common.cmake
# docs/backend/SYCL.md
# examples/main/README.md
# examples/speculative/speculative.cpp
# ggml/CMakeLists.txt
# ggml/src/CMakeLists.txt
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-musa/CMakeLists.txt
# ggml/src/ggml-sycl/CMakeLists.txt
# ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
# tests/test-backend-ops.cpp
2025-03-19 19:27:11 +08:00
Jeff Bolz
c446b2edd2
vulkan: Submit once enough matmul work has been recorded ( #12406 )
...
I've been seeing significantly worse performance for tg with flash attention
enabled vs disabled, and it seems to be related to the submit heuristic.
Change the heuristic to check how many bytes worth of weight matrix are
used and flush every 100MB, and ramp up after the first few submits.
This seems to resolve the issue, and also increases perf for non-FA a bit.
2025-03-19 08:26:26 +01:00
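A host-side sketch of the heuristic as described (the 100MB figure comes from the message above; the ramp constants and all names are illustrative):
```
#include <cstddef>

struct SubmitHeuristic {
    size_t bytes_since_submit = 0;
    int    submit_count       = 0;

    // called as each matmul is recorded, with the size of its weight matrix
    bool should_submit(size_t weight_bytes) {
        bytes_since_submit += weight_bytes;
        // ramp up: submit eagerly at first so the GPU starts working early,
        // then settle at roughly 100MB of weights per submit
        size_t threshold = (submit_count < 3) ? (25u << 20) << submit_count
                                              : 100u << 20;
        if (bytes_since_submit < threshold) return false;
        bytes_since_submit = 0;
        ++submit_count;
        return true;
    }
};
```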
0cc4m
fd123cfead
Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues ( #12434 )
2025-03-18 07:21:40 +01:00
Molly Sophia
7dfad387e3
llama: Add support for RWKV v7 architecture ( #12412 )
...
* ggml: Add op l2_norm
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* ggml: Add op rwkv_wkv7
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* llama: Add support for RWKV7 and ARWKV7 models
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* llama: fix inference with RWKV6Qwen2
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* llama: add more (a)rwkv7 variants in size
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* Apply code-format changes
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* fix MUSA build
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
* llama: fix shape error with rwkv using llama-parallel
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
---------
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
2025-03-18 07:27:50 +08:00
Jeff Bolz
484a8ab513
vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader ( #12312 )
2025-03-17 09:26:18 -05:00
Daniele
cf2270e4d3
vulkan: subgroup size tuning ( #12087 )
...
* vulkan: subgroup size test
* Vulkan: Add device architecture enum and logic to recognize AMD generations
* vulkan: use new architecture logic to specify subgroup size
* Initial vulkan subgroup size tuning for RDNA3
* vulkan: commonize RDNA subgroup tuning
* vulkan: override subgroup size if required_subgroup_size = 0
* vulkan: disable warp 32 for RDNA3
* vulkan: fine tuned RDNA1 subgroup sizes
* vulkan: adjusted subgroup size map
* vulkan: fixed RDNA2 subgroup map
---------
Co-authored-by: 0cc4m <picard12@live.de>
2025-03-17 12:42:33 +01:00
Jeff Bolz
891c63956d
vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking ( #12273 )
...
* vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking
2025-03-17 10:41:59 +01:00
Jeff Bolz
2f21123c1d
vulkan: Adjust coopmat2 tile sizes and selection heuristic ( #12258 )
2025-03-17 10:35:00 +01:00
Concedo
0cddbe1f0b
Merge branch 'upstream' into concedo_experimental
2025-03-05 00:22:06 +08:00
Concedo
6b7d2349a7
Rewrite history to fix bad vulkan shader commits without increasing repo size
...
added dpe colab (+8 squashed commit)
Squashed commit:
[b8362da4] updated lite
[ed6c037d] move nsigma into the regular sampler stack
[ac5f61c6] relative filepath fixed
[05fe96ab] export template
[ed0a5a3e] nix_example.md: refactor (#1401 )
* nix_example.md: add override example
* nix_example.md: drop graphics example, already basic nixos knowledge
* nix_example.md: format
* nix_example.md: Vulkan is disabled on macOS
Disabled in: 1ccd253acc
* nix_examples.md: nixpkgs.config.cuda{Arches -> Capabilities}
Fixes: https://github.com/LostRuins/koboldcpp/issues/1367
[675c62f7] AutoGuess: Phi 4 (mini) (#1402 )
[4bf56982] phrasing
[b8c0df04] Add Rep Pen to Top N Sigma sampler chain (#1397)
- place after nsigma and before xtc (+3 squashed commit)
Squashed commit:
[87c52b97] disable VMM from HIP
[ee8906f3] edit description
[e85c0e69] Remove Unnecessary Rep Counting (#1394)
* stop counting reps
* fix range-based initializer
* strike that - reverse it
2025-03-05 00:02:20 +08:00
cmdr2
0cbee131ad
cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)
...
ggml-ci
2025-03-03 18:18:11 +02:00
William Tambellini
70680c48e5
ggml : upgrade init_tensor API to return a ggml_status ( #11854 )
...
* Upgrade init_tensor API to return a ggml_status
To prepare for an 'abort-free' ggml
(ggml not to abort on OOMs but return an OOM status),
as agreed with Diego in the ggml repo,
upgrade the init_tensor() and view_init() APIs
to return a ggml_status.
* misc fixes
---------
Co-authored-by: slaren <slarengh@gmail.com>
2025-02-28 14:41:47 +01:00
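The caller-side contract after this change, sketched: init_tensor reports failure instead of aborting, so an OOM becomes recoverable. ggml_status is the real enum name in ggml; the stand-in definition and init_tensor_fn below are purely illustrative.
```
#include <stdio.h>

typedef enum { STATUS_SUCCESS = 0, STATUS_ALLOC_FAILED = -2 } status_sketch;

status_sketch init_tensor_fn(void) { return STATUS_ALLOC_FAILED; /* e.g. OOM */ }

int main(void) {
    if (init_tensor_fn() != STATUS_SUCCESS) {
        fprintf(stderr, "tensor init failed (likely OOM); recovering instead of aborting\n");
        return 1;  // the caller decides what to do; ggml no longer aborts here
    }
    return 0;
}
```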
Rémy O
438a83926a
vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations ( #11595 )
...
* vulkan: implement specialized MMV kernels for IQ2 quantizations
* vulkan: add MMV kernels for IQ3 quants
* vulkan: Increase MMV batch size and unroll IQ LUT setup
* vulkan: fix init_iq_shmem for WG sizes larger than tables
* vulkan: common batch size for all I-quants
2025-02-28 09:42:52 +01:00
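On the init_iq_shmem fix in that list: the IQ lookup tables are copied into shared memory cooperatively, and a workgroup with more threads than table entries needs the copy loop guarded. An illustrative CUDA rendering (the real helper is GLSL):
```
#include <cstdint>

template <int TABLE_SIZE>
__device__ void init_iq_shmem_sketch(uint32_t* shmem, const uint32_t* table) {
    // the strided loop covers both cases: workgroups smaller than the table
    // take several steps, larger ones leave the excess threads idle
    for (int i = threadIdx.x; i < TABLE_SIZE; i += blockDim.x)
        shmem[i] = table[i];
    __syncthreads();  // table must be fully populated before any lane reads it
}
```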
Jeff Bolz
a82c9e7c23
vulkan: fix assertion when qy_needs_dequant ( #12068 )
...
Looks like a copy/paste bug from qx_needs_dequant.
2025-02-25 16:30:21 +01:00
Judd
c132239bfb
add OP sigmoid ( #12056 )
...
Co-authored-by: Judd <foldl@boxvest.com>
2025-02-25 12:32:20 +01:00
Rémy O
61d4f39dfe
vulkan: implement more backpropagation operators ( #11914 )
...
* vulkan: implement GGML_OP_ROPE_BACK
* vulkan: implement GGML_OP_RMS_NORM_BACK
* vulkan: implement GGML_OP_SILU_BACK
* vulkan: implement GGML_OP_SOFTMAX_BACK
2025-02-25 12:04:45 +01:00
Concedo
6d7ef10671
Merge branch 'upstream' into concedo_experimental
...
Re-enable qwen2vl GPU for vulkan https://github.com/ggml-org/llama.cpp/pull/11902
# Conflicts:
# .github/workflows/build.yml
# .github/workflows/docker.yml
# .gitignore
# CONTRIBUTING.md
# Makefile
# common/CMakeLists.txt
# common/arg.cpp
# common/common.cpp
# examples/main/main.cpp
# examples/run/run.cpp
# examples/server/tests/README.md
# ggml/src/ggml-cuda/mma.cuh
# scripts/get_chat_template.py
# tests/test-backend-ops.cpp
# tests/test-chat-template.cpp
# tests/test-chat.cpp
2025-02-20 23:17:20 +08:00
Rémy O
2eea03d86a
vulkan: implement several ops relevant for ggml_opt ( #11769 )
...
* vulkan: support memset_tensor
* vulkan: support GGML_OP_SUM
* vulkan: implement GGML_OP_ARGMAX
* vulkan: implement GGML_OP_SUB
* vulkan: implement GGML_OP_COUNT_EQUAL
* vulkan: implement GGML_OP_OPT_STEP_ADAMW
* vulkan: fix check_results RWKV_WKV6 crash and memory leaks
* vulkan: implement GGML_OP_REPEAT_BACK
* tests: remove invalid test-backend-ops REPEAT_BACK tests
* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
2025-02-17 07:55:57 +01:00
Jeff Bolz
bf42a23d0a
vulkan: support multi/vision rope, and noncontiguous rope ( #11902 )
2025-02-16 08:52:23 +01:00
Concedo
f144b1f345
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/llama-cpp-cuda.srpm.spec
# .devops/llama-cpp.srpm.spec
# .devops/nix/package.nix
# .devops/rocm.Dockerfile
# .github/ISSUE_TEMPLATE/020-enhancement.yml
# .github/ISSUE_TEMPLATE/030-research.yml
# .github/ISSUE_TEMPLATE/040-refactor.yml
# .github/ISSUE_TEMPLATE/config.yml
# .github/pull_request_template.md
# .github/workflows/bench.yml.disabled
# .github/workflows/build.yml
# .github/workflows/labeler.yml
# CONTRIBUTING.md
# Makefile
# README.md
# SECURITY.md
# ci/README.md
# common/CMakeLists.txt
# docs/android.md
# docs/backend/SYCL.md
# docs/build.md
# docs/cuda-fedora.md
# docs/development/HOWTO-add-model.md
# docs/docker.md
# docs/install.md
# docs/llguidance.md
# examples/cvector-generator/README.md
# examples/imatrix/README.md
# examples/imatrix/imatrix.cpp
# examples/llama.android/llama/src/main/cpp/CMakeLists.txt
# examples/llama.swiftui/README.md
# examples/llama.vim
# examples/lookahead/README.md
# examples/lookup/README.md
# examples/main/README.md
# examples/passkey/README.md
# examples/pydantic_models_to_grammar_examples.py
# examples/retrieval/README.md
# examples/server/CMakeLists.txt
# examples/server/README.md
# examples/simple-cmake-pkg/README.md
# examples/speculative/README.md
# flake.nix
# grammars/README.md
# pyproject.toml
# scripts/check-requirements.sh
2025-02-16 02:08:39 +08:00
Rémy O
fc1b0d0936
vulkan: initial support for IQ1_S and IQ1_M quantizations ( #11528 )
...
* vulkan: initial support for IQ1_S and IQ1_M quantizations
* vulkan: define MMV kernels for IQ1 quantizations
* devops: increase timeout of Vulkan tests again
* vulkan: simplify ifdef for init_iq_shmem
2025-02-15 09:01:40 +01:00
Concedo
754fef5204
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/cuda.Dockerfile
# .devops/musa.Dockerfile
# .github/workflows/build.yml
# README.md
# docs/docker.md
# examples/imatrix/imatrix.cpp
# examples/llama-bench/llama-bench.cpp
# examples/main/README.md
# examples/perplexity/perplexity.cpp
# examples/server/README.md
# ggml/src/ggml-cpu/ggml-cpu.c
# ggml/src/ggml-cuda/CMakeLists.txt
# models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
# models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
# scripts/get_chat_template.py
# scripts/sync-ggml.last
# tests/test-chat.cpp
# tests/test-gguf.cpp
# tests/test-sampling.cpp
2025-02-15 00:49:46 +08:00
Concedo
39fad991cc
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# README.md
# examples/main/README.md
# examples/run/run.cpp
2025-02-14 11:34:29 +08:00
Eve
a4f011e8d0
vulkan: linux builds + small subgroup size fixes ( #11767 )
...
* mm subgroup size
* upload vulkan x86 builds
2025-02-14 02:59:40 +00:00
Danny Milosavljevic
c2a67efe38
vulkan: Make Vulkan optional at runtime ( #11493 ). ( #11494 )
...
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-02-10 07:17:21 +01:00
Wagner Bruna
b044a0fe3c
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation ( #11592 )
2025-02-10 07:08:22 +01:00
Jeff Bolz
98f6b0fd1e
vulkan: account for lookup tables when checking shared memory size ( #11502 )
2025-02-09 08:43:51 +01:00
Concedo
27b9358baf
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# examples/run/run.cpp
# scripts/sync-ggml.last
2025-02-08 01:31:49 +08:00
Jeff Bolz
c026ba3c23
vulkan: print shared memory size ( #11719 )
2025-02-07 11:26:03 +01:00