| Name | Last commit message | Last commit date |
|---|---|---|
| batched | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched-bench | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched.swift | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| eval-callback | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| export-lora | export-lora : fix tok_embd tensor (#11330) | 2025-01-21 14:07:12 +01:00 |
| gbnf-validator | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| gritlm | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| imatrix | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| infill | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| jeopardy | | |
| llama-bench | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00 |
| llama.android | llama.android: add field formatChat to control whether to parse special tokens when send message (#11270) | 2025-01-17 14:57:56 +02:00 |
| llama.swiftui | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| lookahead | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookup | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| main | Tool call support (generic + native for Llama, Functionary, Hermes, Mistral, Firefunction, DeepSeek) w/ lazy grammars (#9639) | 2025-01-30 19:13:58 +00:00 |
| parallel | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| passkey | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| perplexity | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| quantize | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| quantize-stats | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| retrieval | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | Parse https://ollama.com/library/ syntax (#11480) | 2025-01-29 11:23:10 +00:00 |
| save-load-state | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| server | server : update help metrics processing/deferred (#11512) | 2025-01-31 06:04:53 +01:00 |
| simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| simple-chat | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00 |
| simple-cmake-pkg | cmake: add ggml find package (#11369) | 2025-01-26 12:07:48 -04:00 |
| speculative | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative-simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | tts : add guide tokens support (#11186) | 2025-01-18 12:20:57 +02:00 |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |