koboldcpp/scripts
Latest commit: f73de33f74 by Concedo: Merge branch 'master' into concedo_experimental (2024-01-29 23:12:09 +08:00)

Conflicts resolved in the merge:
  .github/workflows/build.yml
  .github/workflows/docker.yml
  CMakeLists.txt
  Makefile
  README.md
  ci/README.md
  ci/run.sh
  flake.lock
  ggml-metal.m
  ggml-opencl.cpp
  ggml-vulkan-shaders.hpp
  ggml-vulkan.cpp
  ggml-vulkan.h
  ggml.c
  ggml_vk_generate_shaders.py
  llama.cpp
  llama.h
  pocs/vdot/vdot.cpp
  tests/test-llama-grammar.cpp
  tests/test-sampling.cpp
File                       Last commit                                                                            Date
check-requirements.sh      python : add check-requirements.sh and GitHub workflow (#4585)                         2023-12-29 16:50:29 +02:00
ci-run.sh                  ci : add model tests + script wrapper (#4586)                                          2024-01-26 14:18:00 +02:00
compare-llama-bench.py     compare-llama-bench: tweak output format (#4910)                                       2024-01-13 15:52:53 +01:00
gen-build-info-cpp.cmake   cmake : fix issue with version info not getting baked into LlamaConfig.cmake (#3970)   2023-11-27 21:25:42 +02:00
get-flags.mk               build : detect host compiler and cuda compiler separately (#4414)                      2023-12-13 12:10:10 -05:00
get-hellaswag.sh           scripts : add get-winogrande.sh                                                        2024-01-18 20:45:39 +02:00
get-pg.sh                  scripts : improve get-pg.sh (#4838)                                                    2024-01-09 19:21:13 +02:00
get-winogrande.sh          scripts : add get-winogrande.sh                                                        2024-01-18 20:45:39 +02:00
run-with-preset.py         scripts : move run-with-preset.py from root to scripts folder                          2024-01-26 17:09:44 +02:00
server-llm.sh              scripts : add server-llm.sh (#3868)                                                    2023-11-01 11:29:07 +02:00
sync-ggml-am.sh            scripts : sync-ggml-am.sh option to skip commits                                       2024-01-14 11:08:41 +02:00
sync-ggml.last             sync : ggml                                                                            2024-01-28 19:48:05 +02:00