koboldcpp/scripts

Latest commit: 81ab64f3c8 by Aman Gupta (2026-01-24 14:25:20 +08:00)
ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934)

* ggml-cuda: add split-wise cuda graph
* add n-cpu-moe compare_llama_bench.py
* fix hip/musa builds
| Name | Last commit | Date |
| --- | --- | --- |
| apple/ | | |
| jinja/ | | |
| | snapdragon hexagon: support for OP_CPY, host buffers now optional, hvx-utils refactoring and optimizations (#18822) | 2026-01-14 21:46:12 -08:00 |
| bench-models.sh | scripts : add script to bench models (#16894) | 2025-11-02 00:15:31 +02:00 |
| build-info.sh | | |
| check-requirements.sh | | |
| compare-commits.sh | | |
| compare-llama-bench.py | ggml-cuda: enable cuda-graphs for n-cpu-moe (#18934) | 2026-01-24 14:25:20 +08:00 |
| compare-logprobs.py | scripts: add script to compare logprobs of llama.cpp against other frameworks (#17947) | 2025-12-13 22:33:29 +01:00 |
| create_ops_docs.py | | |
| debug-test.sh | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| fetch_server_test_models.py | | |
| gen-authors.sh | | |
| gen-unicode-data.py | | |
| get-flags.mk | | |
| get-hellaswag.sh | | |
| get-pg.sh | | |
| get-wikitext-2.sh | | |
| get-wikitext-103.sh | | |
| get-winogrande.sh | | |
| get_chat_template.py | scripts: corrected encoding when getting chat template (#11866) (#11907) | 2025-02-18 10:30:16 +01:00 |
| hf.sh | | |
| install-oneapi.bat | | |
| pr2wt.sh | scripts : follow api redirects in pr2wt.sh (#18739) | 2026-01-10 16:04:05 +01:00 |
| serve-static.js | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| server-bench.py | | |
| sync-ggml-am.sh | | |
| sync-ggml.last | sync : ggml | 2025-12-31 18:54:43 +02:00 |
| sync-ggml.sh | | |
| sync_vendor.py | common : implement new jinja template engine (#18462) | 2026-01-16 11:22:06 +01:00 |
| tool_bench.py | refactor : remove libcurl, use OpenSSL when available (#18828) | 2026-01-14 18:02:47 +01:00 |
| tool_bench.sh | | |
| verify-checksum-models.py | | |
| xxd.cmake | | |