koboldcpp/examples

Latest commit: rpc : fix cache directory initialization (#13188), by xiaofei <hbuxiaofei@gmail.com> (a0f7016d17), 2025-04-30 09:29:22 +03:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | | |
| batched-bench | | |
| batched.swift | | |
| convert-llama2c-to-ggml | | |
| cvector-generator | | |
| deprecation-warning | | |
| embedding | embeddings : fix batch sizes (#13076) | 2025-04-24 22:29:22 +03:00 |
| eval-callback | | |
| export-lora | | |
| gen-docs | | |
| gguf | | |
| gguf-hash | | |
| gguf-split | | |
| gritlm | | |
| imatrix | | |
| infill | | |
| jeopardy | | |
| llama-bench | llama-bench: fixed size of fields to correctly map to values (#13183) | 2025-04-29 17:24:36 +02:00 |
| llama.android | | |
| llama.swiftui | | |
| llava | mtmd : add qwen2vl and qwen2.5vl (#13141) | 2025-04-29 11:47:04 +02:00 |
| lookahead | | |
| lookup | | |
| main | main : Fix Ctrl+D/newline handling (#12951) | 2025-04-18 22:02:55 +02:00 |
| parallel | | |
| passkey | | |
| perplexity | | |
| quantize | | |
| retrieval | | |
| rpc | rpc : fix cache directory initialization (#13188) | 2025-04-30 09:29:22 +03:00 |
| run | | |
| save-load-state | | |
| server | server : Prefilling assistant message in openai compatible API (#13174) | 2025-04-29 20:33:10 +02:00 |
| simple | | |
| simple-chat | | |
| simple-cmake-pkg | | |
| speculative | | |
| speculative-simple | | |
| sycl | | |
| tokenize | | |
| tts | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |