Mirror of https://github.com/LostRuins/koboldcpp.git
* Factor out `reduce_rows_f32` from common.cuh

  This increases iteration cycle speed by not having to recompile every kernel all the time

* Hide memory latency by loop unrolling in reduce_rows_f32

* Further optimizations to `reduce_rows_f32` (see the kernel sketch below)

  1. Increase threadblock size to better hide the latency of memory requests. As a consequence of bigger threadblocks, do a 2-step summation, using shared memory to communicate results between invocations
  2. Use a sum_temp array to reduce waits on sum
  3. Adjust num_unroll to reflect the bigger threadblock
  4. Improve default block_dims, increase support for more block_dims

* Add perf tests for the `reduce_rows_f32` kernel

* Add a heuristic to toggle 128/512 threads based on SM count (see the dispatch sketch below)

  The break-even point was the minimum of the following multiples.

  | GPU Model | Nrow / SM count multiple |
  | --- | --- |
  | RTX 4000 SFF ADA | 2.0x |
  | RTX 6000 ADA | 2.5x |
  | RTX PRO 6000 Blackwell Max-Q | 3.04x |
  | RTX PRO 4500 Blackwell | 3.15x |

* Ensure perf gains also for small ncols and large nrows

  As an alternative, the number of unrollings could have been made template-able, but that would require compiling the kernel multiple times, increasing binary size unnecessarily

* Modify perf and unit tests

* Apply auto-formatting by clang

* Fix CI build failure

  See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486. Building with the VS generator worked, though.

* Remove the sm_count property from `ggml_backend_cuda_context`

  Requested by @JohannesGaessler; this should also fix the remaining CI issues as a side effect

* Add a CUB-based implementation for GGML_OP_MEAN (see the CUB sketch below)

  Currently this branch is only executed for nrows == 1

* Add heuristics to execute the CUB branch only when it brings a perf gain

  The heuristics were determined on the following HW:

  * RTX 4000 SFF ADA
  * RTX 6000 ADA
  * RTX PRO 6000 Blackwell Max-Q
  * RTX PRO 4500 Blackwell

* Add a unit test for the CUB-based mean

  Tests should run with CUDA Graphs enabled by default on NVIDIA GPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

  Suggested by @JohannesGaessler

* Unindent preprocessor directives

  See https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506
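A minimal sketch of what the optimized row reduction could look like, combining the techniques the bullets above describe: unrolled strided loads, a `sum_temp` array of independent partials, and a two-step reduction (warp shuffles, then shared memory across warps). The kernel name, the value `num_unroll = 8`, and the layout assumption of one block per contiguous row are illustrative, not the upstream code:

```cpp
// Sketch only: one threadblock reduces one row of ncols f32 values.
// block_size is a compile-time template parameter (e.g. 128 or 512).
template <int block_size>
__global__ void reduce_rows_f32_sketch(const float * __restrict__ x,
                                       float * __restrict__ dst,
                                       const int ncols) {
    const int row = blockIdx.x;
    const int tid = threadIdx.x;
    x += (size_t) row * ncols;

    // Independent partial sums avoid serializing on a single accumulator
    // and let the unrolled loads overlap their memory latency.
    constexpr int num_unroll = 8; // assumed value, tuned per threadblock size
    float sum_temp[num_unroll] = {0.0f};

    for (int col0 = tid; col0 < ncols; col0 += block_size * num_unroll) {
#pragma unroll
        for (int i = 0; i < num_unroll; ++i) {
            const int col = col0 + i * block_size;
            if (col < ncols) {
                sum_temp[i] += x[col];
            }
        }
    }

    float sum = 0.0f;
#pragma unroll
    for (int i = 0; i < num_unroll; ++i) {
        sum += sum_temp[i];
    }

    // Step 1: reduce within each warp using shuffles.
#pragma unroll
    for (int offset = 16; offset > 0; offset >>= 1) {
        sum += __shfl_xor_sync(0xffffffff, sum, offset);
    }

    // Step 2: for blocks wider than a warp, communicate the per-warp
    // results through shared memory and finish the sum in warp 0.
    if (block_size > 32) {
        __shared__ float s_warp_sums[block_size / 32];
        const int warp = tid / 32;
        const int lane = tid % 32;
        if (lane == 0) {
            s_warp_sums[warp] = sum;
        }
        __syncthreads();
        if (warp == 0) {
            sum = lane < block_size / 32 ? s_warp_sums[lane] : 0.0f;
#pragma unroll
            for (int offset = 16; offset > 0; offset >>= 1) {
                sum += __shfl_xor_sync(0xffffffff, sum, offset);
            }
        }
    }

    if (tid == 0) {
        dst[row] = sum;
    }
}
```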
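The 128/512-thread toggle then becomes a host-side dispatch on the device's SM count. This sketch assumes the kernel above is in the same .cu file; the helper name and the flat `3 * sm_count` threshold are hypothetical stand-ins for the per-GPU break-even points (2.0x to 3.15x of the SM count) in the table above:

```cpp
#include <cuda_runtime.h>

// Hypothetical launcher: pick 512 threads when there are few rows relative
// to the SM count (each SM needs a big block to stay busy), 128 otherwise.
static void launch_reduce_rows_f32(const float * x, float * dst,
                                   const int ncols, const int nrows,
                                   cudaStream_t stream) {
    int device   = 0;
    int sm_count = 0;
    cudaGetDevice(&device);
    cudaDeviceGetAttribute(&sm_count, cudaDevAttrMultiProcessorCount, device);

    const dim3 grid(nrows); // one block per row
    if (nrows <= 3 * sm_count) { // assumed break-even multiple
        reduce_rows_f32_sketch<512><<<grid, 512, 0, stream>>>(x, dst, ncols);
    } else {
        reduce_rows_f32_sketch<128><<<grid, 128, 0, stream>>>(x, dst, ncols);
    }
}
```

Since the commit later removes the cached sm_count property from `ggml_backend_cuda_context`, the final code presumably reads the SM count from wherever ggml keeps its per-device info rather than caching it on the context as shown here.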
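For GGML_OP_MEAN in the nrows == 1 branch, a CUB-based reduction can compute the mean in a single `cub::DeviceReduce::Sum` pass by pre-scaling each element with a transform iterator. Everything here (function name, scaling functor, async scratch allocation) is an illustrative assumption, not the PR's exact code:

```cpp
#include <cub/cub.cuh>
#include <cuda_runtime.h>

// Scales each loaded element by a constant factor (1/ncols for a mean).
struct scale_by {
    float factor;
    __host__ __device__ float operator()(const float v) const {
        return v * factor;
    }
};

// Hypothetical helper: mean of a single row of ncols f32 values.
static void mean_f32_cub_sketch(const float * x, float * dst,
                                const int ncols, cudaStream_t stream) {
    cub::TransformInputIterator<float, scale_by, const float *>
        it(x, scale_by{1.0f / (float) ncols});

    // CUB convention: the first call with a null buffer only reports the
    // required scratch size, the second call performs the reduction.
    size_t tmp_bytes = 0;
    cub::DeviceReduce::Sum(nullptr, tmp_bytes, it, dst, ncols, stream);

    void * tmp = nullptr;
    cudaMallocAsync(&tmp, tmp_bytes, stream); // needs CUDA 11.2+
    cub::DeviceReduce::Sum(tmp, tmp_bytes, it, dst, ncols, stream);
    cudaFreeAsync(tmp, stream);
}
```

Guarding such a helper behind the renamed `GGML_CUDA_USE_CUB` macro matches the commit; whether the CUB path actually beats the hand-written kernel is exactly what the per-GPU heuristics above decide.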
Files in the `tests/` directory:

.gitignore
CMakeLists.txt
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp
test-autorelease.cpp
test-backend-ops.cpp
test-barrier.cpp
test-c.c
test-chat-parser.cpp
test-chat-template.cpp
test-chat.cpp
test-double-float.cpp
test-gbnf-validator.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp
test-grammar-parser.cpp
test-json-partial.cpp
test-json-schema-to-grammar.cpp
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-mtmd-c-api.c
test-opt.cpp
test-quantize-fns.cpp
test-quantize-perf.cpp
test-quantize-stats.cpp
test-regex-partial.cpp
test-rope.cpp
test-sampling.cpp
test-thread-safety.cpp
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py
test-tokenizers-repo.sh