koboldcpp/examples
Name | Last commit | Last commit date
baby-llama | build : enable more non-default compiler warnings (#3200) | 2023-09-28 17:41:44 -04:00
batched | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2023-10-24 16:48:37 +03:00
batched-bench | batched-bench : print params at start | 2023-10-25 10:26:27 +03:00
batched.swift | speculative : add tree-based sampling example (#3624) | 2023-10-18 16:21:57 +03:00
beam-search | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
benchmark | benchmark-matmult : do not use integer abs() on a float (#3277) | 2023-09-20 12:06:08 -04:00
convert-llama2c-to-ggml | gguf : support big endian platform (#3552) | 2023-10-20 14:19:40 +03:00
embedding | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00
export-lora | train : finetune LORA (#2632) | 2023-09-28 21:40:11 +03:00
finetune | ggml : add context enumeration functions (#3605) | 2023-10-13 12:23:10 +02:00
gguf | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00
infill | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
jeopardy | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00
llama-bench | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
llava | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
main | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
main-cmake-pkg | cmake : add missed dependencies (#3763) | 2023-10-24 20:48:45 +03:00
metal | |
parallel | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
perplexity | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
quantize | build : enable more non-default compiler warnings (#3200) | 2023-09-28 17:41:44 -04:00
quantize-stats | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00
save-load-state | save-load-state : fix example + add ci test (#3655) | 2023-10-17 19:12:46 +03:00
server | server : add parameter -tb N, --threads-batch N (#3584) (#3768) | 2023-10-24 23:10:43 +03:00
simple | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
speculative | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00
train-text-from-scratch | train-text-from-scratch : fix assert failure in ggml-alloc (#3618) | 2023-10-17 20:00:58 +03:00
alpaca.sh | |
chat-13B.bat | |
chat-13B.sh | |
chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00
chat-vicuna.sh | |
chat.sh | main : log file (#2748) | 2023-08-30 09:29:32 +03:00
CMakeLists.txt | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00
gpt4all.sh | |
json-schema-to-grammar.py | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00
llama.vim | |
llama2-13b.sh | |
llama2.sh | |
llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00
make-ggml.py | make-ggml.py : compatibility with more models and GGUF (#3290) | 2023-09-27 19:25:12 +03:00
Miku.sh | |
reason-act.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00
server-llama2-13B.sh | chmod : make scripts executable (#2675) | 2023-08-23 17:29:09 +03:00