| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama | | |
| batched | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| batched-bench | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| batched.swift | llama : llama_perf + option to disable timings during decode (#9355) | 2024-09-13 09:53:38 +03:00 |
| convert-llama2c-to-ggml | vocab : refactor tokenizer to reduce init overhead (#9449) | 2024-09-28 15:10:58 +03:00 |
| cvector-generator | metal : reduce command encoding overhead (#9698) | 2024-10-01 16:00:25 +03:00 |
| deprecation-warning | | |
| embedding | llama : add reranking support (#9510) | 2024-09-28 17:42:03 +03:00 |
| eval-callback | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| export-lora | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| gbnf-validator | | |
| gen-docs | server : add more env vars, improve gen-docs (#9635) | 2024-09-25 14:05:13 +02:00 |
| gguf | | |
| gguf-hash | | |
| gguf-split | gguf-split : improve --split and --merge logic (#9619) | 2024-10-02 10:21:57 +03:00 |
| gritlm | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| imatrix | imatrix : disable prompt escape by default (#9543) | 2024-09-19 10:58:14 +03:00 |
| infill | log : add CONT level for continuing previous log entry (#9610) | 2024-09-24 10:15:35 +03:00 |
| jeopardy | | |
| llama-bench | llama-bench: correct argument parsing error message (#9524) | 2024-09-17 22:41:38 +02:00 |
| llama.android | | |
| llama.swiftui | | |
| llava | metal : reduce command encoding overhead (#9698) | 2024-10-01 16:00:25 +03:00 |
| lookahead | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| lookup | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| main | Merge branch 'dev' into feat/auto-exit | 2025-05-20 02:04:14 +08:00 |
| main-cmake-pkg | | |
| parallel | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| passkey | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| perplexity | perplexity : remove extra new lines after chunks (#9596) | 2024-09-23 11:28:02 +03:00 |
| quantize | quantize : improve type name parsing (#9570) | 2024-09-20 20:55:36 +02:00 |
| quantize-stats | | |
| retrieval | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| rpc | rpc : enable vulkan (#9714) | 2024-10-03 13:00:52 +03:00 |
| save-load-state | | |
| server | fix context shifting | 2025-05-19 16:58:35 +04:00 |
| simple | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| speculative | sampling : avoid expensive softmax during greedy sampling (#9605) | 2024-09-24 09:03:17 +03:00 |
| sycl | [SYCL]set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | common : reimplement logging (#9418) | 2024-09-15 20:46:12 +03:00 |
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | | |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | examples : remove benchmark (#9704) | 2024-10-02 10:14:44 +03:00 |
| convert_legacy_llama.py | | |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | | |
| llama.vim | | |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |