Concedo
1daeed5d4d
Merge commit '9963b81f63' into concedo_experimental
...
# Conflicts:
# .github/workflows/server.yml
# SECURITY.md
# docs/backend/SYCL.md
# examples/model-conversion/README.md
# examples/model-conversion/scripts/embedding/compare-embeddings-logits.sh
# ggml/src/ggml-hexagon/ggml-hexagon.cpp
# ggml/src/ggml-hexagon/htp/matmul-ops.c
# tests/CMakeLists.txt
# tests/test-chat.cpp
# tests/test-json-schema-to-grammar.cpp
2025-12-17 20:30:34 +08:00
Concedo
c93c4c5505
Merge commit '4a4f7e6550' into concedo_experimental
...
# Conflicts:
# .github/ISSUE_TEMPLATE/011-bug-results.yml
# CODEOWNERS
# README.md
# ci/run.sh
# docs/development/HOWTO-add-model.md
# grammars/README.md
# src/llama-context.cpp
# src/llama.cpp
# tools/CMakeLists.txt
# tools/completion/README.md
# tools/llama-bench/README.md
2025-12-17 14:30:39 +08:00
Concedo
050a5b1f52
Merge commit '4aced7a631' into concedo_experimental
...
# Conflicts:
# .devops/cann.Dockerfile
# .devops/cpu.Dockerfile
# .devops/cuda.Dockerfile
# .devops/intel.Dockerfile
# .devops/musa.Dockerfile
# .devops/rocm.Dockerfile
# .devops/tools.sh
# .devops/vulkan.Dockerfile
# .github/workflows/build.yml
# .github/workflows/release.yml
# .gitignore
# docs/ops.md
# docs/ops/SYCL.csv
# examples/batched/batched.cpp
# examples/eval-callback/eval-callback.cpp
# examples/gen-docs/gen-docs.cpp
# examples/lookahead/lookahead.cpp
# examples/lookup/lookup-create.cpp
# examples/lookup/lookup-stats.cpp
# examples/lookup/lookup.cpp
# examples/model-conversion/scripts/causal/compare-logits.py
# examples/model-conversion/scripts/causal/run-org-model.py
# examples/model-conversion/scripts/utils/check-nmse.py
# examples/parallel/parallel.cpp
# examples/retrieval/retrieval.cpp
# examples/save-load-state/save-load-state.cpp
# examples/speculative-simple/speculative-simple.cpp
# examples/speculative/speculative.cpp
# examples/training/finetune.cpp
# ggml/CMakeLists.txt
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-cpu/repack.cpp
# ggml/src/ggml-sycl/common.hpp
# ggml/src/ggml-sycl/convert.cpp
# ggml/src/ggml-sycl/dequantize.hpp
# ggml/src/ggml-sycl/dpct/helper.hpp
# ggml/src/ggml-sycl/element_wise.cpp
# ggml/src/ggml-sycl/element_wise.hpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-sycl/mmvq.cpp
# ggml/src/ggml-sycl/pad.cpp
# ggml/src/ggml-sycl/ssm_conv.cpp
# ggml/src/ggml-sycl/vecdotq.hpp
# pyrightconfig.json
# scripts/sync-ggml.last
# tests/test-arg-parser.cpp
# tests/test-backend-ops.cpp
# tools/cvector-generator/cvector-generator.cpp
# tools/imatrix/imatrix.cpp
# tools/mtmd/CMakeLists.txt
# tools/mtmd/clip.cpp
# tools/perplexity/perplexity.cpp
# tools/server/README.md
2025-12-16 23:14:12 +08:00
Daniel Bevenius
2995341730
llama : add support for NVIDIA Nemotron 3 Nano (#18058)
...
* llama : add support for NVIDIA Nemotron Nano 3
This commit adds support for the NVIDIA Nemotron Nano 3 model, enabling
the conversion and running of this model.
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-16 07:19:26 +01:00
HelloKS
9d52f17ae3
model : add KORMo model (#18032)
...
* vocab: add KORMo Tokenizer
* model: add KORMoForCausalLM
* vocab: change pretokenizer to qwen2
* lint: fix unintended line removal
* model: make qwen2 bias tensor optional
* model: use qwen2 architecture for KORMo
2025-12-15 18:51:43 +01:00
Johannes Gäßler
b1f3a6e5db
llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653)
...
* llama: automatically fit args to free memory
llama-fit-params tool
* fix CI
* hints for bug reports, ensure no reallocation
* fix segfault with Vulkan
* add llama-fit-params to CI
* fix CI
* fix CI
* fix CI
* minor adjustments
* fix assignment of 1 dense layer
* fix logger not being reset on model load failure
* remove --n-gpu-layer hint on model load failure
* fix llama-fit-params verbosity
* fix edge case
* fix typo [no ci]
2025-12-15 09:24:59 +01:00
Xuan-Son Nguyen
0759b09c90
graph: add f_attn_temp_offset (#18025)
2025-12-14 13:05:59 +01:00
Georgi Gerganov
609a2d0268
models : fix YaRN regression + consolidate logic (#18006)
...
* models : fix YaRN regression + consolidate logic
* cont : fix the fix
* cont : remove header
* cont : add header
2025-12-14 08:34:56 +02:00
Georgi Gerganov
7bed317f53
models : fix the attn_factor for mistral3 graphs + improve consistency (#17945)
...
* models : fix the attn_factor for mistral3 graphs
* cont : rework attn_factor correction logic
* cont : make deepseek2 consistent
* cont : add TODO
* cont : special-case DSv2
* cont : revert Mistral 3 Large changes
* cont : fix DS2 to use the original attn_factor
* cont : minor comments
2025-12-12 17:12:40 +02:00
Concedo
34d243bf3c
Merge commit 'b677721819' into concedo_experimental
...
# Conflicts:
# CONTRIBUTING.md
# common/chat.cpp
# docs/ops.md
# docs/ops/CPU.csv
# docs/ops/CUDA.csv
# docs/ops/OpenCL.csv
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/common.h
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-sycl/softmax.cpp
# grammars/README.md
# src/CMakeLists.txt
# tests/test-backend-ops.cpp
# tests/test-chat.cpp
# tests/test-grammar-integration.cpp
# tests/test-grammar-parser.cpp
# tests/test-llama-grammar.cpp
# tools/mtmd/CMakeLists.txt
2025-12-11 23:33:19 +08:00
Concedo
278e45becf
Merge commit '2fa51c19b0' into concedo_experimental
...
# Conflicts:
# .github/actions/windows-setup-cuda/action.yml
# .github/workflows/build-linux-cross.yml
# .github/workflows/release.yml
# README.md
# docs/build-riscv64-spacemit.md
# examples/model-conversion/logits.cpp
# ggml/CMakeLists.txt
# ggml/src/ggml-cpu/CMakeLists.txt
# models/templates/Kimi-K2-Instruct.jinja
# models/templates/Kimi-K2-Thinking.jinja
# tests/test-chat.cpp
# tools/server/README.md
2025-12-11 23:04:48 +08:00
Eric Zhang
b677721819
model : Qwen3-Next-80B-A3B has 48 layers (#17898)
...
* model : Qwen3-Next-80B-A3B has 48 layers
* model : Add 80B-A3B type name
2025-12-10 15:22:40 +01:00
Sigbjørn Skjæret
42b12b5608
model : nit, DeepSeek V1 MoE is 16B and GigaChat is 20B (#12652)
...
* nit, DeepSeek V1 MoE is 16B
* base type on n_ff_exp instead
2025-12-09 12:15:06 +01:00
philip-essential
1d2a1ab73d
model : support Rnj-1 (#17811)
...
* add support for rnj1
* refactor gemma3 to support rnj-1
* address review comments
2025-12-09 04:49:03 +01:00
Xuan-Son Nguyen
4d3726278b
model: add llama 4 scaling for mistral-large (deepseek arch) (#17744)
2025-12-07 22:29:54 +01:00
Concedo
03cec02a3d
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# .github/workflows/release.yml
# .github/workflows/winget.yml
# CODEOWNERS
# README.md
# ci/run.sh
# docs/build.md
# docs/ops.md
# docs/ops/Vulkan.csv
# ggml/CMakeLists.txt
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# scripts/sync_vendor.py
# src/CMakeLists.txt
# tests/test-json-schema-to-grammar.cpp
# tests/test-quantize-stats.cpp
# tools/server/CMakeLists.txt
# tools/server/README.md
2025-12-03 18:56:31 +08:00
Herman Semenoff
37adc9c6ba
ggml, llama : use defaulted constructors/destructors (#17649)
2025-12-03 07:12:18 +01:00
Piotr Wilkin (ilintar)
746f9ee889
Override SSM_A op for Qwen3 Next to reduce splits (#17587)
...
* Override SSM_A op for Qwen3 Next to reduce splits
* New tensor mapping SSM_A_NOSCAN for SSM_A used outside of OP_SSM_SCAN context.
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-12-02 00:43:13 +01:00
Gilad S.
00c361fe53
fix: llama arch implementation (#17665)
2025-12-01 21:21:13 +01:00
Xuan-Son Nguyen
cd3c118908
model: support Ministral3 (#17644)
...
* conversion script
* support ministral 3
* maybe this is better?
* add TODO for rope_yarn_log_mul
* better ppl (tested on 14B-Instruct)
* Add Ministral3 support to Mistral format
* improve arch handling
* add sizes
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* nits
---------
Co-authored-by: Julien Denize <julien.denize@mistral.ai>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-12-01 12:26:52 +01:00
Concedo
0ccb298087
Merge commit 'ddf9f94389' into concedo_experimental
...
# Conflicts:
# examples/model-conversion/scripts/causal/run-converted-model.sh
# examples/model-conversion/scripts/causal/run-org-model.py
# src/CMakeLists.txt
# src/llama-quant.cpp
# tools/server/README.md
2025-11-28 23:27:50 +08:00
Piotr Wilkin (ilintar)
ff55414c42
model : Qwen3 Next (#16095)
...
* Qwen3 Next - cleaned up version
* Whitespaces and stuff
* Correct minor errors
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Misc. fixes.
* Clean up code, add missing hybrid qualifier
* Did someone transpose the SOLVE_TRI result matrix? Perhaps...
* Whitespace
* Proper tensors for cb calls
* Use llama-graph.h vertical alignment
* BROKEN: chunking
* Set new tensors as inputs.
* Proper chunk logic
* It's the circle of life...
* More shenanigans for n_seq > 1
* Nail in the coffin?
* Fix Windows build
* Eh, one fails on Windows, the other fails on Mac... just use general capture.
* quant : cleanup
* model : cleanup
* qwen3 : cleanup
* cont : cleanup
* cont : cleanup
* ggml : revert change
* qwen3 : cleanup
* cont : cleanup
* Readd cmath
* qwen3 : fix typo
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Usual suspects
* fix my bad suggestion
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-28 12:02:56 +01:00
Concedo
eda4a312cb
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/vulkan.Dockerfile
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-opencl/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-sycl/common.hpp
# tests/test-backend-ops.cpp
# tools/server/README.md
2025-11-28 13:22:02 +08:00
Georgi Gerganov
6783b11fb0
models : fix LFM2 tensors (#17548)
2025-11-27 16:04:29 +02:00
Concedo
724763fdec
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/vulkan.Dockerfile
# .github/workflows/build.yml
# .github/workflows/server.yml
# common/common.cpp
# examples/batched/README.md
# ggml/CMakeLists.txt
# ggml/src/CMakeLists.txt
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-cpu/CMakeLists.txt
# ggml/src/ggml-cpu/arch-fallback.h
# ggml/src/ggml-opencl/ggml-opencl.cpp
# scripts/sync-ggml.last
# src/CMakeLists.txt
# tests/test-backend-ops.cpp
# tools/server/CMakeLists.txt
2025-11-25 16:38:07 +08:00
Aaron Teo
877566d512
llama: introduce support for model-embedded sampling parameters (#17120)
2025-11-25 09:56:07 +08:00
william pan
4902eebe33
models : Added support for RND1 Diffusion Language Model (#17433)
...
* Converted RND1 model to GGUF weights
* RND1 llama.cpp support v1
* RND1 llama.cpp support v2 non causal bug
* RND1 llama.cpp support v3 documentation
* RND1 llama.cpp support v4 clean code
* linting issues
* RND1 pr fixes v1
* RND1 pr fixes v2
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Diffusion documentation edits
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-24 14:16:56 +08:00
ubergarm
23bc779a6e
model : detect GigaChat3-10-A1.8B as deepseek lite (#17420)
...
* Detect GigaChat3-10-A1.8B as deepseek lite
Hardcodes a check on the number of layers to detect the lite version of deepseek.
* Add comment identifying deepseek lite variants
deepseek lite variants include DeepSeek-V2-Lite, GigaChat3-10B-A1.8B
2025-11-21 14:51:38 +01:00
LostRuins Concedo
3fe0e39b62
Merge commit '4dca015b7e' into concedo_experimental
...
# Conflicts:
# .github/copilot-instructions.md
# README.md
# docs/ops.md
# docs/ops/CPU.csv
# docs/ops/CUDA.csv
# docs/ops/Vulkan.csv
# ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp
# src/CMakeLists.txt
# tests/test-backend-ops.cpp
2025-11-16 18:33:58 +08:00
Bartowski
e1fcf8b09b
model : add AfmoeForCausalLM support (#16477)
...
* Add AFMOE model support
* Update to vocab
* Add model sizing
* Undo Rope change for ARCEE model
* Address review comments
* Update modeling code is_sliding -> use_rope, replace hard-coded logic
* Fix AFMOE tokenizer
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update AFMoE tokenizer class identification to be more unique
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-14 13:54:10 +01:00
LostRuins Concedo
d6a2ad8455
still not really working right
2025-11-09 01:57:48 +08:00
LostRuins Concedo
e6ca0aa8d0
Merge commit '2f0c2db43e' into concedo_experimental
...
# Conflicts:
# .github/labeler.yml
# README.md
# docs/backend/OPENCL.md
# docs/ops.md
# docs/ops/CUDA.csv
# ggml/src/ggml-webgpu/ggml-webgpu.cpp
# ggml/src/ggml-webgpu/wgsl-shaders/set_rows.tmpl.wgsl
# scripts/sync-ggml.last
# src/CMakeLists.txt
# tools/server/README.md
2025-11-08 23:27:59 +08:00
LostRuins Concedo
fdcb281a3a
Merge commit '2f966b8ed8' into concedo_experimental
...
# Conflicts:
# .github/workflows/release.yml
# docs/docker.md
# ggml/src/CMakeLists.txt
# ggml/src/ggml-cpu/CMakeLists.txt
# tests/test-backend-ops.cpp
# tests/test-thread-safety.cpp
# tools/batched-bench/batched-bench.cpp
# tools/mtmd/clip.cpp
2025-11-08 10:34:17 +08:00
Sigbjørn Skjæret
9008027aa3
hparams : add n_embd_inp() to support extended embed (#16928)
...
* add n_embd_full to support extended embed
* don't change output
* rename to n_embd_inp
* restore n_embd where applicable
2025-11-07 19:27:58 +01:00
Li Pengzhan
9f052478c2
model : add openPangu-Embedded (#16941)
...
* Model: add openPangu-Embedded
* fixed according to reviewers' comments
* fixed the chat template check condition
* Apply suggestions from code review
change the chat-template check condition and some formatting issue
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* whitespace cleanup
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-11-05 10:28:58 +01:00
LostRuins Concedo
fc80cdccc2
Merge commit 'bea04522ff' into concedo_experimental
...
# Conflicts:
# scripts/sync-ggml.last
# src/CMakeLists.txt
# tests/test-backend-ops.cpp
2025-11-05 12:41:01 +08:00
Georgi Gerganov
cd5e3b5754
server : support unified cache across slots (#16736)
...
* server : support unified context across slots
* cont : fix speculative decoding initialization
* context : fix n_ctx_per_seq computation
* server : purge slots one by one
* tests : add unified cache server tests
* llama : update per-seq context computation
* test-thread-safety : handle tiny training context of the input model
* server : fix server_tokens clear()
* server : use 4 slots + unified KV by default
* llama : add note about context size queries
* cont : update todos [no ci]
* context : do not cap the size of the context
* tests : adjust parameters to be CI friendlier
* context : add warning
2025-11-02 18:14:04 +02:00
Piotr Wilkin (ilintar)
bea04522ff
refactor : llama-model.cpp (#16252)
...
* Squashed: llama-model.cpp refactoring
* Fix formatting of attn / ffn / ffn_moe calls
* Fix import regression / unify spacing in models.h
* totally DID NOT miss those!
* Add missing qwen3vl(moe) models
* Add missing new .cpp files to build
* Remove extra semicolons
* Editor checker
* Update src/models/models.h
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-31 23:40:23 +01:00
Piotr Wilkin (ilintar)
0de0a01576
model : Minimax M2 (#16831)
...
* Model: Minimax M2
* Cleanup
* Cleanup pt. 2
* Cleanup pt. 3
* Update convert_hf_to_gguf_update.py - merge catch blocks
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Remove vocab models and test
* Remove all redundant hparam settings covered by TextModel
* Move super to start, don't set block_count
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update gguf-py/gguf/constants.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-31 21:20:47 +01:00
Giuseppe Scrivano
e58d585604
model : add Granite Hybrid nano types (#16896)
...
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-10-31 21:20:07 +01:00
Concedo
2b00e55356
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/docker.yml
# ggml/src/ggml-opencl/kernels/mul_mm_f16_f32_l4_lm.cl
# ggml/src/ggml-opencl/kernels/mul_mm_f32_f32_l4_lm.cl
# ggml/src/ggml-sycl/rope.cpp
# ggml/src/ggml-webgpu/wgsl-shaders/rope.tmpl.wgsl
# requirements/requirements-convert_legacy_llama.txt
# tests/test-backend-ops.cpp
# tests/test-rope.cpp
# tools/server/README.md
2025-10-31 10:52:57 +08:00
JJJYmmm
d261223d24
model: add support for qwen3vl series (#16780)
...
* support qwen3vl series.
Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>
* bugfix: fix the arch check for qwen3vl-moe.
* use build_ffn
* optimize deepstack structure
* optimize deepstack feature saving
* Revert "optimize deepstack feature saving" for temporal fix
This reverts commit f321b9fdf13e59527408152e73b1071e19a87e71.
* code clean
* use fused qkv in clip
* clean up / rm is_deepstack_layers for simplification
* add test model
* move test model to "big" section
* fix imrope check
* remove trailing whitespace
* fix rope fail
* metal : add imrope support
* add imrope support for sycl
* vulkan: add imrope w/o check
* fix vulkan
* webgpu: add imrope w/o check
* Update gguf-py/gguf/tensor_mapping.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* fix tensor mapping
---------
Co-authored-by: Thireus ☠ <Thireus@users.noreply.github.com>
Co-authored-by: yairpatch <yairpatch@users.noreply.github.com>
Co-authored-by: LETS-BEE <LETS-BEE@users.noreply.github.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-30 16:19:14 +01:00
Tianyue-Zhao
bacddc049a
model: Add support for CogVLM model (#15002)
...
* Added GGUF mappings for CogVLM model
* Add tensor mapping for CogVLM visual encoder
* Add CogVLM to conversion script, no vision part yet
* Added CogVLM vision model to conversion script
* Add graph for CogVLM CLIP model
* Add graph for CogVLM
* Fixes for CogVLM. Now compiles.
* Model now runs
* Fixes for cogvlm graph
* Account for graph context change after rebase
* Changes for whitespace
* Changes in convert script according to comments
* Switch CogVLM LLM graph to merged QKV tensor
* Use rope_type variable instead of direct definition
* Change CogVLM CLIP encoder to use SWIGLU
* Switch CogVLM CLIP to use merged QKV
* Apply rebase edits and remove ggml_cont call that is now unnecessary
* clean up
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-10-30 12:18:50 +01:00
Concedo
16cbe9f24e
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# CODEOWNERS
# docs/ops.md
# docs/ops/SYCL.csv
# examples/embedding/README.md
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-sycl/backend.hpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-sycl/norm.cpp
# ggml/src/ggml-sycl/norm.hpp
# scripts/snapdragon/adb/run-bench.sh
# scripts/snapdragon/adb/run-cli.sh
# src/llama-batch.cpp
# tests/test-backend-ops.cpp
# tests/test-chat.cpp
# tests/test-json-schema-to-grammar.cpp
# tools/llama-bench/README.md
2025-10-30 13:44:46 +08:00
Georgi Gerganov
85a7d8677b
memory : remove KV cache size padding (#16812)
...
* memory : remove KV cache size padding
* cont : restore padding for n_kv tensor shape
* server : use slot context size instead of training context size
* server : simplify context limit logic
2025-10-28 20:19:44 +02:00
Johannes Gäßler
7a0e900e36
llama: consistent ctx <-> buf order for KV cache (#16746)
2025-10-28 11:23:54 +01:00
Concedo
eaee2110c3
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# README.md
# ggml/src/ggml-sycl/backend.hpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# tests/test-backend-ops.cpp
2025-10-27 22:36:19 +08:00
Johannes Gäßler
945501f5ea
llama: fix leaked buffers for mmap + split files (#16765)
2025-10-27 09:17:31 +01:00
Sigbjørn Skjæret
73a48c9790
convert : enable expert group selection for all models with it (#16691)
2025-10-26 17:21:23 +01:00
Sigbjørn Skjæret
7cce4f8158
model : set res->t_embd in SmallThinker models (#16782)
2025-10-26 16:08:52 +01:00