| Name | Last commit | Last commit date |
|------|-------------|-------------------|
| models | refactor : llama-model.cpp (#16252) | 2025-10-31 23:40:23 +01:00 |
| llama-adapter.cpp | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-adapter.h | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-arch.cpp | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-arch.h | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-batch.cpp | batch : fix consistency checks for the input positions (#16890) | 2025-10-31 13:50:33 +02:00 |
| llama-batch.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-chat.cpp | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-chat.h | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-context.cpp | Merge commit '5a4ff43e7d' into concedo_experimental | 2025-10-30 13:13:00 +08:00 |
| llama-context.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : bump max seq limit from 64 to 256 (#15916) | 2025-09-18 12:47:56 +03:00 |
| llama-grammar.cpp | Add memoized cache to llama_grammar_reject_candidates_for_stack (#1615) | 2025-06-25 19:22:19 +08:00 |
| llama-grammar.h | tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama-graph.cpp | Kcpp triage for rowsplit: revert https://github.com/ggml-org/llama.cpp/pull/16715 until https://github.com/ggml-org/llama.cpp/issues/16799 is resolved | 2025-11-02 09:58:41 +08:00 |
| llama-graph.h | Revert "graph : support cacheless embeddings with FA and iSWA" | 2025-10-16 12:07:48 +08:00 |
| llama-hparams.cpp | model: add support for qwen3vl series (#16780) | 2025-10-30 16:19:14 +01:00 |
| llama-hparams.h | model: add support for qwen3vl series (#16780) | 2025-10-30 16:19:14 +01:00 |
| llama-impl.cpp | extend log | 2025-06-26 18:52:44 +08:00 |
| llama-impl.h | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | Merge commit '128d522c04' into concedo_experimental | 2025-10-04 23:51:22 +08:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | Merge branch 'upstream' into concedo_experimental | 2025-10-31 10:52:57 +08:00 |
| llama-kv-cache.h | memory : remove KV cache size padding (#16812) | 2025-10-28 20:19:44 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | memory : use sequential equal splits for recurrent modules (#16442) | 2025-10-07 08:24:17 +03:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | Merge branch 'upstream' into concedo_experimental | 2025-10-30 13:44:46 +08:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | readjusted mistral and oai template, fixed compile issue on termux, updated lite, show generated token ids in debug mode | 2025-08-07 21:14:48 +08:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | Merge branch 'upstream' into concedo_experimental | 2025-10-03 16:44:33 +08:00 |
| llama-model-loader.h | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | Merge commit 'bea04522ff' into concedo_experimental | 2025-11-05 12:41:01 +08:00 |
| llama-model.h | model : Minimax M2 (#16831) | 2025-10-31 21:20:47 +01:00 |
| llama-quant.cpp | Merge branch 'upstream' into concedo_experimental | 2025-10-31 10:52:57 +08:00 |
| llama-quant.h | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | vocab : mark EOT token for Granite models (#16499) | 2025-10-10 17:17:31 +03:00 |
| llama-sampling.h | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-vocab.cpp | Merge commit 'bea04522ff' into concedo_experimental | 2025-11-05 12:41:01 +08:00 |
| llama-vocab.h | Merge commit 'bea04522ff' into concedo_experimental | 2025-11-05 12:41:01 +08:00 |
| llama.cpp | before merging conflicting round | 2025-10-16 12:15:44 +08:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | Merge branch 'upstream' into concedo_experimental | 2025-07-16 12:03:54 +08:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |