| Name | Last commit message | Last commit date |
| --- | --- | --- |
| models | model: move load_hparams and load_tensors to per-model definition (#22004) | 2026-05-04 12:36:59 +02:00 |
| CMakeLists.txt | cmake: use glob to collect src/models sources (#22005) | 2026-04-16 23:25:16 +02:00 |
| llama-adapter.cpp | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-adapter.h | llama : re-enable manual LoRA adapter free (#19983) | 2026-03-18 12:03:26 +02:00 |
| llama-arch.cpp | mtmd, llama : Update HunyuanVL vision-language model support (#22037) | 2026-04-22 11:58:43 +02:00 |
| llama-arch.h | mtmd, llama : Update HunyuanVL vision-language model support (#22037) | 2026-04-22 11:58:43 +02:00 |
| llama-batch.cpp | kv-cache : fix M-RoPE checkpoints (#20132) | 2026-03-06 08:46:51 +02:00 |
| llama-batch.h | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-chat.cpp | model : add HunyuanOCR support (#21395) | 2026-04-05 23:32:14 +02:00 |
| llama-chat.h | model : add HunyuanOCR support (#21395) | 2026-04-05 23:32:14 +02:00 |
| llama-context.cpp | llama : add option to save memory in device buffers (#22679) | 2026-05-05 06:35:07 +03:00 |
| llama-context.h | llama : add option to save memory in device buffers (#22679) | 2026-05-05 06:35:07 +03:00 |
| llama-cparams.cpp | | |
| llama-cparams.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00 |
| llama-ext.h | llama-ext : fix exports (#22202) | 2026-04-21 11:04:46 +03:00 |
| llama-grammar.cpp | common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) | 2026-03-21 18:43:35 +01:00 |
| llama-grammar.h | | |
| llama-graph.cpp | graph : handle non-contiguous Q/K/V in mul_mat_aux (#22630) | 2026-05-05 06:34:44 +03:00 |
| llama-graph.h | model : refactor QKV into common build_qkv and create_tensor_qkv helpers (#21245) | 2026-04-16 17:41:34 +02:00 |
| llama-hparams.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00 |
| llama-hparams.h | mtmd, llama : Update HunyuanVL vision-language model support (#22037) | 2026-04-22 11:58:43 +02:00 |
| llama-impl.cpp | llama : correct platform-independent loading of BOOL metadata (#21428) | 2026-04-06 01:40:38 +02:00 |
| llama-impl.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00 |
| llama-io.cpp | server : avoid checkpoint data host copies (#22558) | 2026-05-02 18:03:25 +03:00 |
| llama-io.h | llama : add option to save memory in device buffers (#22679) | 2026-05-05 06:35:07 +03:00 |
| llama-kv-cache-iswa.cpp | (revert) kv-cache : do not quantize SWA KV cache (#21332) | 2026-04-03 09:07:01 +03:00 |
| llama-kv-cache-iswa.h | | |
| llama-kv-cache.cpp | ggml : implement fast walsh-hadamard transform for kv rotation (#21352) (#22631) | 2026-05-05 10:05:05 +08:00 |
| llama-kv-cache.h | kv-cache : support attention rotation for heterogeneous iSWA (#21513) | 2026-04-07 20:31:28 +03:00 |
| llama-kv-cells.h | | |
| llama-memory-hybrid-iswa.cpp | memory: respect unified KV cache in hybrid memory for eval tasks (#21224) | 2026-04-01 12:50:17 +03:00 |
| llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid.cpp | memory: respect unified KV cache in hybrid memory for eval tasks (#21224) | 2026-04-01 12:50:17 +03:00 |
| llama-memory-hybrid.h | | |
| llama-memory-recurrent.cpp | llama : add option to save memory in device buffers (#22679) | 2026-05-05 06:35:07 +03:00 |
| llama-memory-recurrent.h | | |
| llama-memory.cpp | | |
| llama-memory.h | | |
| llama-mmap.cpp | Update llama-mmap to use ftello/fseeko (#22497) | 2026-04-30 14:17:52 -07:00 |
| llama-mmap.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-loader.cpp | ggml: add Q1_0 1-bit quantization support (CPU) (#21273) | 2026-04-06 20:55:21 +02:00 |
| llama-model-loader.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model.cpp | model : don't crash on unsupported architecture (#22742) | 2026-05-06 18:51:21 +02:00 |
| llama-model.h | model: move load_hparams and load_tensors to per-model definition (#22004) | 2026-05-04 12:36:59 +02:00 |
| llama-quant.cpp | model: move load_hparams and load_tensors to per-model definition (#22004) | 2026-05-04 12:36:59 +02:00 |
| llama-quant.h | | |
| llama-sampler.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-sampler.h | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-vocab.cpp | vocab: add gemma4 tokenizer tests, fix edge case (#21534) | 2026-04-09 11:41:14 +02:00 |
| llama-vocab.h | vocab: fix Gemma4 tokenizer (#21343) | 2026-04-03 10:33:03 +02:00 |
| llama.cpp | common : only load backends when required (#22290) | 2026-05-05 09:23:50 +02:00 |
| unicode-data.cpp | | |
| unicode-data.h | | |
| unicode.cpp | unicode : add custom Qwen2 regex handler to fix segfault on long input (#21257) | 2026-04-07 16:13:38 +03:00 |
| unicode.h | vocab: fix Gemma4 tokenizer (#21343) | 2026-04-03 10:33:03 +02:00 |