Concedo
8bd0a560f0
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# requirements/requirements-convert_hf_to_gguf_update.txt
# scripts/compare-llama-bench.py
# tests/test-backend-ops.cpp
# tests/test-chat.cpp
# tools/imatrix/README.md
# tools/imatrix/imatrix.cpp
# tools/llama-bench/llama-bench.cpp
2025-08-04 22:42:02 +08:00
Concedo
d37529c0cd
add sanitize flag
2025-08-04 22:19:23 +08:00
compilade
11a3811164
memory : handle kv_unified for hybrid models ( #15050 )
2025-08-03 21:43:07 +02:00
Csaba Kecskemeti
97366dc6ab
vocab : JetBrains Mellum pre-tokenizer ( #15045 )
2025-08-03 21:38:18 +02:00
Daniel Bevenius
4fdea540bd
kv-cache : skip alignment of n_stream in kv-cache log msg [no ci] ( #15040 )
...
This commit removes the right alignment of the `n_stream` value in the
log message in the `llama_kv_cache_unified` constructor.
The motivation for this change is to enhance the readability of the log
message. Currently, the output looks like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/ 1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
Notice that the `n_stream` value is right-aligned, which makes it a
little harder to read.
With the change in this commit, the output will look like this:
```console
llama_kv_cache_unified: size = 2048.00 MiB ( 4096 cells, 32 layers, 1/1 seqs), K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
```
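As an aside, padding of this kind typically comes from a printf-style field width. A minimal sketch, assuming a plain `printf`-based formatter (illustration only, not the actual llama.cpp logging code):
```cpp
#include <cstdio>

int main() {
    const unsigned n_stream  = 1;
    const unsigned n_seq_max = 1;

    // With a field width, the second value is right-aligned: "1/ 1 seqs"
    std::printf("%u/%2u seqs\n", n_stream, n_seq_max);

    // Without the width specifier, the values print compactly: "1/1 seqs"
    std::printf("%u/%u seqs\n", n_stream, n_seq_max);
    return 0;
}
```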
2025-08-02 17:14:57 +03:00
Georgi Gerganov
a4569c41fd
llama : enable LLAMA_SET_ROWS=1 by default ( #14959 )
...
ggml-ci
2025-08-02 17:14:21 +03:00
Douglas Hanley
339bd0268c
model : support Qwen3-Embedding ( #15023 )
2025-08-02 10:44:50 +02:00
Concedo
f430916a71
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# docs/backend/CANN.md
# docs/multimodal/minicpmo2.6.md
# docs/multimodal/minicpmv2.5.md
# docs/multimodal/minicpmv2.6.md
# examples/speculative-simple/speculative-simple.cpp
# ggml/cmake/ggml-config.cmake.in
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-cpu/repack.cpp
# ggml/src/ggml-opencl/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-opencl/kernels/add.cl
# ggml/src/ggml-opencl/kernels/mul.cl
# scripts/compare-commits.sh
# scripts/compare-llama-bench.py
# scripts/sync-ggml.last
# tools/server/README.md
2025-08-02 10:25:10 +08:00
Concedo
b04362f831
Merge commit '00131d6eaf' into concedo_experimental
...
# Conflicts:
# docs/ops.md
# examples/save-load-state/save-load-state.cpp
# ggml/CMakeLists.txt
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-hip/CMakeLists.txt
# ggml/src/ggml-sycl/cpy.cpp
# ggml/src/ggml-sycl/cpy.hpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-sycl/set_rows.cpp
# scripts/server-bench.py
# tests/CMakeLists.txt
# tests/test-backend-ops.cpp
# tests/test-thread-safety.cpp
# tools/llama-bench/llama-bench.cpp
2025-08-02 10:15:39 +08:00
stevenkuang
0f5ccd6fd1
model : add hunyuan dense ( #14878 )
...
* support hunyuan_v1_dense
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* update hunyuan_moe to hunyuan_v1_moe
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* fix rope alpha assert and bos token
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* add blank line
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* Revert "update hunyuan_moe to hunyuan_v1_moe"
This reverts commit aa973ca21913aba77f6e81a935270ef7be222e75.
* use hunyuan_dense instead of hunyuan_v1_dense
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* fix hunyuan_moe chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* remove leftover code
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* update hunyuan dense chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
* fix hunyuan dense vocab and chat template
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
---------
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
2025-08-01 15:31:12 +02:00
Georgi Gerganov
ba42794c9e
graph : fix equal_seq() check ( #14986 )
...
ggml-ci
2025-08-01 06:38:12 +03:00
Ed Addario
daf2dd7880
quantize : skip tensor override when in fallback mode ( #14995 )
2025-07-31 21:32:18 +02:00
Diego Devesa
d6818d06a6
llama : allow other bufts when overriding to CPU, add --no-repack option ( #14990 )
2025-07-31 18:11:34 +02:00
Dongliang Wei
c1dacaa99b
llama : merge build_moe_ffn_from_probs function into build_moe_ffn ( #14968 )
2025-07-31 14:12:20 +02:00
Aman Gupta
8a4a856277
Add LLaDA 8b Diffusion model ( #14771 )
...
* Add support for Llada-8b: diffusion model
* Add README
* Fix README and convert_hf_to_gguf
* convert_hf_to_gguf.py: address review comments
* Make everything in a single example
* Remove model-specific sampling
* Remove unused argmax
* Remove braced initializers, improve README.md a bit
* Add diffusion specific gguf params in set_vocab, remove setting rope_theta and rms_norm_eps
* Remove adding the mask token
* Move add_add_bos_token to set_vocab
* use add_bool in gguf_writer.py
2025-07-31 19:49:09 +08:00
compilade
66625a59a5
graph : reduce splits for recurrent and hybrid models ( #14825 )
...
* graph : avoid creating redundant s_copy views
* graph : comment the s_copy views
2025-07-31 08:02:46 +03:00
Georgi Gerganov
00131d6eaf
tests : update for LLAMA_SET_ROWS=1 ( #14961 )
...
* test-thread-safety : each context uses a single sequence
* embedding : handle --parallel argument
ggml-ci
* save-load : handle -np 1
ggml-ci
* thread-safety : avoid overriding threads, reduce test case arg
ggml-ci
2025-07-30 15:12:02 +03:00
Georgi Gerganov
1e15bfd42c
graph : fix stack-use-after-return ( #14960 )
...
ggml-ci
2025-07-30 13:52:11 +03:00
Douglas Hanley
a118d80233
embeddings: fix extraction of CLS pooling results ( #14927 )
...
* embeddings: fix extraction of CLS pooling results
* merge RANK pooling into CLS case for inputs
2025-07-30 08:25:05 +03:00
Concedo
b8425f5a9c
merge but voxtral not working
2025-07-28 22:08:05 +08:00
Dongliang Wei
6c6e397aff
model : add support for SmallThinker series ( #14898 )
...
* support smallthinker
* support 20b softmax, 4b no sliding window
* new build_moe_ffn_from_probs, and can run 4b
* fix 4b rope bug
* fix python type check
* remove is_moe judge
* remove set_dense_start_swa_pattern function and modify set_swa_pattern function
* trim trailing whitespace
* remove get_vocab_base of SmallThinkerModel in convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* better whitespace
Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* use GGML_ASSERT for expert count validation
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Improve null pointer check for probs
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* use template parameter for SWA attention logic
* better whitespace
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* move the creation of inp_out_ids before the layer loop
* remove redundant judge for probs
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-28 13:47:00 +02:00
Daniel Bevenius
ca0ef2dddb
llama : clarify comment about pp and tg graphs [no ci] ( #14895 )
...
* llama : clarify comment about pp and tg graphs [no ci]
This commit clarifies the comment in `llama-context.cpp` regarding the
prefill prompt (pp) and token generation (tg) graphs.
The motivation for this is that I've struggled to remember these and had
to look them up more than once, so I thought it would be helpful to add
a comment that makes it clear what these stand for.
* squash! llama : clarify comment about pp and tg graphs [no ci]
Change "pp" to "prompt processing".
2025-07-27 12:10:51 +02:00
Concedo
21b7d0a899
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/rocm.Dockerfile
# docs/build-s390x.md
# docs/development/HOWTO-add-model.md
# docs/ops.md
# docs/ops/CPU.csv
# docs/ops/CUDA.csv
# ggml/CMakeLists.txt
# ggml/src/ggml-cann/acl_tensor.cpp
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-opencl/kernels/rms_norm.cl
# scripts/create_ops_docs.py
# tests/test-backend-ops.cpp
# tools/export-lora/export-lora.cpp
2025-07-27 17:10:53 +08:00
Gabriel Larson
4762ad7316
model : make rope_yarn_log_mul optional for deepseek2 ( #14896 )
...
* make rope_yarn_log_mul optional for deepseek2
* default rope_yarn_log_mul = 0.0f
2025-07-27 11:18:37 +03:00
Shunta Saito
1dc9614e06
llama : fix kq_scale for the attention layers of PLaMo2 ( #14892 )
...
* Fix dimensions for expand
* Change dimensions to copy states to cache
* Fix the default value for plamo2 conversion
* Fix scale given to build_attn
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-07-27 09:38:44 +02:00
Concedo
0fcfbdb93c
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/musa.Dockerfile
# .github/workflows/build.yml
# .github/workflows/close-issue.yml
# ci/README.md
# docs/build.md
# docs/docker.md
# ggml/CMakeLists.txt
# ggml/cmake/ggml-config.cmake.in
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-cuda/fattn-wmma-f16.cu
# ggml/src/ggml-musa/CMakeLists.txt
# ggml/src/ggml-rpc/ggml-rpc.cpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-sycl/vecdotq.hpp
# scripts/sync-ggml.last
# tests/test-backend-ops.cpp
# tools/imatrix/README.md
# tools/imatrix/imatrix.cpp
2025-07-25 19:53:13 +08:00
Georgi Gerganov
c1dbea752a
context : restore preemptive sched reset when LLAMA_SET_ROWS=0 ( #14870 )
...
ggml-ci
2025-07-25 14:28:06 +03:00
Georgi Gerganov
e4868d16d2
context : perform output reorder lazily upon access after sync ( #14853 )
...
* context : perform output reorder lazily upon access after sync
ggml-ci
* cont : add TODO
2025-07-24 16:31:48 +03:00
Xuan-Son Nguyen
820de57d4f
chat : fix kimi-k2 chat template ( #14852 )
2025-07-24 13:59:56 +02:00
yummy
86f5623d90
llama : fix MiniCPM inference after Granite Four changes ( #14850 )
...
MiniCPM models use the llm_build_granite constructor, which was changed
in the Granite Four PR to use hparams.rope_finetuned instead of a
use_rope parameter. MiniCPM models need RoPE enabled by default.
This fixes inference, restoring correct responses instead of gibberish.
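A minimal sketch of the idea described above, using hypothetical names (not the actual builder code in llama.cpp): the graph builder now derives whether to apply RoPE from an hparams flag rather than an explicit argument, so MiniCPM must have that flag set for RoPE to stay on.
```cpp
#include <cstdio>

// Hypothetical, simplified hparams: after the Granite Four change the builder
// reads the RoPE decision from hparams instead of a separate use_rope argument.
struct hparams_t {
    bool rope_finetuned = false;
};

static void build_layer(const hparams_t & hparams) {
    const bool use_rope = hparams.rope_finetuned; // formerly an explicit parameter
    std::printf("RoPE %s\n", use_rope ? "applied" : "skipped");
}

int main() {
    hparams_t minicpm;
    minicpm.rope_finetuned = true; // MiniCPM needs RoPE on by default
    build_layer(minicpm);
    return 0;
}
```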
2025-07-24 11:50:51 +02:00
l3utterfly
7233358d29
memory : handle saving/loading null layers in recurrent memory ( #14675 )
...
* Update llama-memory-recurrent.cpp
handle saving/loading null layers in recurrent memory
* fixed styling issues and updated comments
* fix styling issue
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-07-23 11:16:41 +03:00
Molly Sophia
d4d1522b20
llama : add model type detection for rwkv7 7B&14B ( #14816 )
...
Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
2025-07-22 23:01:29 +08:00
Concedo
9f4d0f6ccf
fixed swa pp bug by retrying smaller batches
2025-07-21 23:34:22 +08:00
Concedo
30675b0798
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# CODEOWNERS
# docs/build.md
# scripts/sync-ggml.last
# tests/test-backend-ops.cpp
# tools/imatrix/README.md
# tools/imatrix/imatrix.cpp
2025-07-20 22:47:31 +08:00
Georgi Gerganov
bf9087f59a
metal : fuse add, mul + add tests ( #14596 )
...
ggml-ci
2025-07-18 20:37:26 +03:00
Georgi Gerganov
9fb1042ce6
graph : fix graph reuse reset of params ( #14760 )
...
ggml-ci
2025-07-18 20:08:33 +03:00
Concedo
b0b7a07b34
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# examples/parallel/parallel.cpp
2025-07-18 23:49:45 +08:00
Georgi Gerganov
d498af3d5a
graph : avoid huge warm-up graphs for MoE models ( #14753 )
...
* graph : avoid huge warm-up graphs for MoE models
ggml-ci
* cont : bump max nodes to 8x model tensors
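The second bullet describes a node-budget heuristic tied to the number of model tensors. A rough sketch of that kind of sizing rule; the 8x multiplier comes from the commit message, while the floor value here is an assumption:
```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Size the graph node budget relative to the model instead of one huge fixed
// constant, so warm-up graphs for MoE models do not balloon. Illustrative only.
static uint32_t graph_max_nodes(uint32_t n_model_tensors) {
    const uint32_t floor_nodes = 1024; // assumed floor, not the upstream value
    return std::max<uint32_t>(floor_nodes, 8 * n_model_tensors);
}

int main() {
    std::printf("max nodes for 512 tensors: %u\n", graph_max_nodes(512));
    return 0;
}
```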
2025-07-18 14:31:15 +03:00
Georgi Gerganov
eacdeb5bfc
model : fix build after merge conflict ( #14754 )
2025-07-18 11:53:55 +03:00
lgai-exaone
e0cb5c5cb8
model : add EXAONE 4.0 support ( #14630 )
2025-07-18 10:45:49 +02:00
Concedo
b8e3280432
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/nix/package.nix
# ggml/src/ggml-sycl/ggml-sycl.cpp
2025-07-18 13:46:32 +08:00
Georgi Gerganov
8f974bc1e9
graph : refactor context to not pass gf explicitly ( #14629 )
...
ggml-ci
2025-07-18 08:29:28 +03:00
Nexes the Elder
09651d09ff
graph : Pass the graph placeholder message in debug mode ( #14748 )
...
Without that condition, this debug log clutters the screen for every batch processed during prompt processing, and for every token generated in Kobold.cpp.
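A minimal sketch of the gating pattern being described, with a hypothetical environment variable (the real condition lives in the graph code):
```cpp
#include <cstdio>
#include <cstdlib>

// Only emit the placeholder-graph message when a debug switch is set, so it
// does not repeat for every processed batch or generated token. Illustrative.
static bool graph_debug_enabled() {
    const char * v = std::getenv("LLAMA_GRAPH_DEBUG"); // hypothetical variable
    return v != nullptr && *v != '0';
}

int main() {
    for (int token = 0; token < 3; ++token) {
        if (graph_debug_enabled()) {
            std::fprintf(stderr, "placeholder graph message (token %d)\n", token);
        }
    }
    return 0;
}
```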
2025-07-18 07:25:54 +03:00
Piotr Wilkin (ilintar)
cb887f1bc1
model: add Ernie 4.5 MoE support ( #14658 )
...
* Add Ernie4.5 MoE
* Fix Flake errors.
* Properly encode/decode MoE layer step
* Correct tensor mappings (.weight)
* Pass and read n_ff_exp
* n_ff_shexp calculation and further minor changes
* Rope fixes.
* .gitignore fix
* Add uint32 cast for Linux builds
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Further fixes from code review
* Fix trailing whitespace
* Reenable missing experts error
* Code style from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Fix non-MoE regression
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-07-17 23:15:32 +02:00
Georgi Gerganov
d6fb3f6b49
kv-cache : fix k-shift for multiple streams ( #14742 )
...
ggml-ci
2025-07-17 20:52:33 +03:00
Georgi Gerganov
01612b7409
llama : reuse compute graphs ( #14482 )
...
* llama : reuse compute graphs
ggml-ci
* llama-bench : add graph reuse parameter
ggml-ci
* cont : remove the parameter and the sched resets
ggml-ci
* graph : rename update() to can_reuse()
ggml-ci
* params : remove is_same()
ggml-ci
* graph : set res->params in llm_graph_context constructor
ggml-ci
* graph : avoid set_max_nodes in llm_graph_result
ggml-ci
* kv-cache : reuse llama_context's graph result instance
ggml-ci
* context : reset the previous graph result upon memory updates
ggml-ci
* batch : llama_ubatch now carries its data instead of pointing to balloc
ggml-ci
* merge : fix build
ggml-ci
* graph : fix can_reuse() checks when flash-attention is disabled
* graph : move llm_graph_result impl in source file + debug env
ggml-ci
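Taken together, the bullets above describe caching the last built graph, checking whether it can be reused for the next ubatch, and resetting the cached result whenever a memory update invalidates it. A simplified sketch of that pattern, with hypothetical fields (not the actual llm_graph_result API):
```cpp
#include <cstdint>
#include <cstdio>

// Simplified stand-ins for the graph parameters that determine reuse.
struct graph_params {
    uint32_t n_tokens = 0;
    uint32_t n_seqs   = 0;
    bool     valid    = false;
};

struct graph_result {
    graph_params params;

    // Reuse only when the shape-determining parameters are unchanged.
    bool can_reuse(const graph_params & p) const {
        return params.valid && params.n_tokens == p.n_tokens && params.n_seqs == p.n_seqs;
    }

    // Invalidate the cached graph, e.g. after a memory (KV cache) update.
    void reset() { params = {}; }
};

int main() {
    graph_result res;
    const graph_params step = {1, 1, true};

    if (!res.can_reuse(step)) {
        std::puts("building new graph");
        res.params = step;
    }
    if (res.can_reuse(step)) {
        std::puts("reusing previous graph");
    }
    res.reset();
    return 0;
}
```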
2025-07-17 19:08:33 +03:00
Concedo
f57018f722
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build-linux-cross.yml
2025-07-17 18:23:26 +08:00
Tarek Dakhran
086cf81e88
llama : fix parallel processing for lfm2 ( #14705 )
2025-07-17 09:22:11 +02:00
Georgi Gerganov
d9b691081c
kv-cache : opt mask set input ( #14600 )
...
ggml-ci
2025-07-17 09:49:15 +03:00
Georgi Gerganov
ad57d3edd2
batch : fix uninitialized has_cpl flag ( #14733 )
...
ggml-ci
2025-07-17 09:45:54 +03:00