Concedo
d06700687f
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/rocm.Dockerfile
# .github/workflows/release.yml
# CMakeLists.txt
# ggml/src/ggml-cuda/common.cuh
# scripts/sync_vendor.py
# tests/test-chat.cpp
2026-02-22 09:33:13 +08:00
crsawyer
07968d53e4
fix: UI single model selection in router mode ( #19767 )
2026-02-21 09:28:39 +01:00
ddh0
492bc31978
quantize : add --dry-run option ( #19526 )
...
* clean slate for branch
* use 6 characters for tensor dims
* add --dry-run to llama-quantize
* use 6 characters for tensor dims (cont.)
* no need to re-calculate ggml_nbytes for tensor
* fix indent
* show model and quant BPW when quant completes
* add example to --help
* new function `tensor_requires_imatrix`, add courtesy warning about imatrix
* missing __func__, move imatrix flag set
* logic error
* fixup tensor_requires_imatrix
* add missing `GGML_TYPE`s
* simplify and rename `tensor_type_requires_imatrix`
* simplify for style
* add back Q2_K edge case for imatrix
* guard ftype imatrix warning
* comment ref #12557
* remove per @compilade
* remove unused `params` parameter
* move `bool dry_run` per GG
* move `bool dry_run` per GG
* Update src/llama-quant.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update src/llama-quant.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update src/llama-quant.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-20 09:20:16 +01:00
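A minimal sketch of the --dry-run gate described in the entry above, assuming hypothetical names (tensor_info, quantize_all); it is not the actual src/llama-quant.cpp code — only the --dry-run flag and the "no need to re-calculate ggml_nbytes" point come from the commit itself:

    // Hypothetical sketch: report per-tensor sizes, but skip all quantization
    // work and file output when dry_run is set.
    #include <cstdint>
    #include <cstdio>

    struct tensor_info {
        const char * name;
        int64_t      nbytes; // already known from tensor metadata, no recompute
    };

    static void quantize_all(const tensor_info * tensors, int n, bool dry_run) {
        int64_t total = 0;
        for (int i = 0; i < n; i++) {
            // always report what would be written
            printf("%-40s %12lld bytes\n", tensors[i].name, (long long) tensors[i].nbytes);
            total += tensors[i].nbytes;
            if (dry_run) {
                continue; // plan only: no quantization, no output file
            }
            // ... quantize and write tensor data here ...
        }
        printf("total: %lld bytes%s\n", (long long) total,
               dry_run ? " (dry run, nothing written)" : "");
    }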
Concedo
e626de2430
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# docs/ops.md
# docs/ops/WebGPU.csv
# embd_res/templates/stepfun-ai-Step-3.5-Flash.jinja
# ggml/src/ggml-webgpu/ggml-webgpu.cpp
# ggml/src/ggml-webgpu/wgsl-shaders/unary.wgsl
# src/CMakeLists.txt
# tests/test-backend-ops.cpp
# tests/test-chat.cpp
# tools/mtmd/CMakeLists.txt
2026-02-20 15:16:26 +08:00
Concedo
07c45ced56
Merge commit 'c78e682245' into concedo_experimental
...
# Conflicts:
# src/models/qwen35.cpp
# src/models/qwen35moe.cpp
2026-02-20 14:41:32 +08:00
Concedo
9eb9e4eb83
Merge commit '8a70973557' into concedo_experimental
...
# Conflicts:
# docs/backend/CANN.md
# docs/backend/SYCL.md
# examples/model-conversion/scripts/utils/tensor-info.py
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-opencl/kernels/expm1.cl
# ggml/src/ggml-opencl/kernels/mean.cl
# ggml/src/ggml-opencl/kernels/softplus.cl
# ggml/src/ggml-opencl/kernels/sum_rows.cl
# ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
# ggml/src/ggml-webgpu/ggml-webgpu.cpp
# ggml/src/ggml-webgpu/wgsl-shaders/common_decls.tmpl
# ggml/src/ggml-webgpu/wgsl-shaders/embed_wgsl.py
# ggml/src/ggml-webgpu/wgsl-shaders/get_rows.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/mul_mat.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_decls.tmpl
# ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_reg_tile.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_subgroup_matrix.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_vec.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/scale.wgsl
# tools/server/webui/src/lib/components/app/chat/ChatScreen/ChatScreen.svelte
2026-02-20 14:36:49 +08:00
crsawyer
10b26ee23a
WebUI hide models in router mode ( #19374 )
2026-02-19 22:53:42 +01:00
megemini
237958db33
model: Add PaddleOCR-VL model support ( #18825 )
...
* support PaddleOCR-VL
* clip: update PaddleOCR model loader parameters to prevent OOM during warmup
* [update] add paddleocr vl text model instead of ernie4.5
* [update] restore change of minicpmv
* [update] format
* [update] format
* [update] positions and patch merge permute
* [update] mtmd_decode_use_mrope for paddleocr
* [update] image min/max pixels
* [update] remove set_limit_image_tokens
* update: preprocess without padding
* clean up
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-19 17:05:25 +01:00
Saba Fallah
e6267a9359
mtmd: build_attn modified, flash_attn on/off via ctx_params ( #19729 )
2026-02-19 13:50:29 +01:00
Tarek Dakhran
c5897995a7
mtmd : chat : Fix extra \n between text and media marker ( #19595 )
...
* mtmd : chat : Fix extra \n between text and media marker
Thanks to @tugot17 for detecting and reporting the issue.
For vision models (e.g. LFM2.5-VL-1.6B and Qwen/Qwen3-VL-4B-Instruct), `llama-mtmd-cli` produces output identical to the HF implementation.
However, `llama-server` doesn't. I traced it down to an extra newline
inserted after `<__media__>`.
This happens in `to_json_oaicompat`, which treats media markers as text
and joins all parts with a `\n` separator.
This PR introduces a new type, `media_marker`, and uses it for media markers.
Extra logic is added to prevent insertion of newlines before and after
media markers.
With this change the number of input tokens is identical to the HF
implementation, and as a result the output is also identical.
I explored other ways to address the issue:
* remove the `\n` between text parts in `to_json_oaicompat` entirely
* merge text messages in server-common.cpp before sending them to `to_json_oaicompat`
Please propose alternative ways of fixing this issue.
* Refactor to use explicit per-type ifs
* Update common/chat.cpp
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
* Update common_chat_templates_apply_legacy
---------
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
2026-02-19 12:18:57 +01:00
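A minimal sketch (not the actual common/chat.cpp code) of the joining rule this commit describes: a `\n` separator goes only between two adjacent text parts, never before or after a media_marker part:

    #include <string>
    #include <vector>

    enum class part_type { text, media_marker };

    struct part {
        part_type   type;
        std::string value;
    };

    static std::string join_parts(const std::vector<part> & parts) {
        std::string out;
        for (size_t i = 0; i < parts.size(); i++) {
            const bool need_sep = i > 0 &&
                parts[i].type     == part_type::text &&
                parts[i - 1].type == part_type::text;
            if (need_sep) {
                out += '\n'; // separator only between adjacent text parts
            }
            out += parts[i].value;
        }
        return out;
    }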
Aleksander Grygier
03fd9d3bb4
webui: Fix Attachments not being included in completion request ( #19731 )
...
* fix: Add missing argument
* chore: update webui build output
2026-02-19 10:27:38 +01:00
matteo
b55dcdef5d
server: save generated text for the /slots endpoint (for LLAMA_SERVER_SLOTS_DEBUG=1) ( #19622 )
...
* save generated text for the /slots endpoint
* update debug_generated_text only when LLAMA_SERVER_SLOTS_DEBUG > 0
* Apply suggestions from code review
---------
Co-authored-by: Matteo <matteo@matteo>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2026-02-18 18:53:37 +01:00
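A sketch of the gating described in the entry above; only LLAMA_SERVER_SLOTS_DEBUG and the debug_generated_text field come from the commit, while the helper and struct names are hypothetical:

    #include <cstdlib>
    #include <string>

    static bool slots_debug_enabled() {
        const char * v = std::getenv("LLAMA_SERVER_SLOTS_DEBUG");
        return v != nullptr && std::atoi(v) > 0;
    }

    struct slot_sketch {
        std::string debug_generated_text;

        void on_token(const std::string & piece) {
            if (slots_debug_enabled()) {
                // pay the memory cost only when debugging is requested;
                // the saved text is then exposed via the /slots endpoint
                debug_generated_text += piece;
            }
        }
    };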
Xuan-Son Nguyen
eeef3cfced
model: support GLM-OCR ( #19677 )
...
* model: support GLM-OCR
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-18 17:51:40 +01:00
Aleksander Grygier
ea003229d3
Pre-MCP UI and architecture cleanup ( #19689 )
2026-02-18 12:02:02 +01:00
Aleksander Grygier
afa6bfe4f7
Pre-MCP UI and architecture cleanup ( #19685 )
...
* webui: extract non-MCP changes from mcp-mvp review split
* webui: extract additional pre-MCP UI and architecture cleanup
* chore: update webui build output
2026-02-17 13:47:45 +01:00
Adrien Gallouët
ae46a61e41
build : link ws2_32 as PUBLIC on Windows ( #19666 )
...
Signed-off-by: Adrien Gallouët <adrien@gallouet.fr>
2026-02-17 08:37:07 +01:00
AesSedai
d612901116
perplexity: add proper batching ( #19661 )
2026-02-16 18:44:44 +02:00
Concedo
72f7e01b27
Merge commit '01d8eaa28d' into concedo_experimental
...
# Conflicts:
# build-xcframework.sh
# scripts/sync_vendor.py
# tests/test-backend-ops.cpp
# tools/mtmd/CMakeLists.txt
# tools/rpc/rpc-server.cpp
2026-02-16 15:36:59 +08:00
Adrien Gallouët
9e118b97c4
build : remove LLAMA_HTTPLIB option ( #19623 )
...
This option was introduced as a workaround because cpp-httplib could not
build on visionOS. Since that has been fixed and cpp-httplib now compiles on
all platforms, we can remove the option and simplify many things.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-02-15 15:38:50 +01:00
Anav Prasad
01d8eaa28d
mtmd : Add Nemotron Nano 12B v2 VL support ( #19547 )
...
* nemotron nano v2 vlm support added
* simplified code; addressed reviews
* pre-downsample position embeddings during GGUF conversion for fixed input size
2026-02-14 14:07:00 +01:00
iMil
badba89320
NetBSD build support ( #19589 )
2026-02-14 09:47:01 +01:00
Aleksander Grygier
baa12f3831
webui: Architecture and UI improvements ( #19596 )
2026-02-14 09:06:41 +01:00
Concedo
45dc155530
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/ISSUE_TEMPLATE/010-bug-compilation.yml
# .github/ISSUE_TEMPLATE/011-bug-results.yml
# AGENTS.md
# SECURITY.md
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c
# ggml/src/ggml-hexagon/htp/main.c
# ggml/src/ggml-opencl/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-opencl/kernels/cvt.cl
# scripts/sync_vendor.py
# src/unicode.cpp
# tests/test-backend-ops.cpp
# tools/cli/cli.cpp
2026-02-14 12:44:16 +08:00
Sigbjørn Skjæret
b2ecc0cdb4
support --verbose-prompt ( #19576 )
2026-02-13 12:49:10 +01:00
Aleksander Grygier
5174d7206f
webui: UI and routing fixes ( #19586 )
...
* chore: update webui build output
* chore: update webui build output
* fix: Scroll issues in DropdownMenuSearchable
* webui: fix redirect to root ignoring base path
* fix: Word wrapping
* fix: remove obsolete modality UI tests causing CI failures
- Remove VisionModality/AudioModality test stories
- Remove mockServerProps usage and imports
- Simplify Default test (remove dropdown interaction checks)
- Simplify FileAttachments test (remove mocks)
* feat: Improve formatting performance
---------
Co-authored-by: Pascal <admin@serveurperso.com>
2026-02-13 12:31:00 +01:00
Concedo
bff3fd3e34
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# common/common.cpp
# docs/backend/snapdragon/README.md
# ggml/src/ggml-hexagon/htp/htp-ops.h
# ggml/src/ggml-hexagon/htp/matmul-ops.c
# ggml/src/ggml-opencl/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# scripts/pr2wt.sh
# tests/test-backend-ops.cpp
# tools/server/README.md
2026-02-13 14:00:45 +08:00
Concedo
55524e160b
temp merge, not working
2026-02-13 12:11:26 +08:00
Aleksander Grygier
4c61875bf8
webui: Add switcher to Chat Message UI to show raw LLM output ( #19571 )
2026-02-12 19:55:51 +01:00
Aleksander Grygier
4d688f9ebb
(webui) FEATURE: Enable adding or injecting System Message into chat ( #19556 )
...
* feat: Enable adding System Prompt per-chat
* fix: Save draft message in Chat Form when adding System Prompt from new chat view
* fix: Proper system message deletion logic
* chore: Formatting
* chore: update webui build output
2026-02-12 13:56:08 +01:00
Aleksander Grygier
f486ce9f30
(webui) REFACTOR: UI primitives and polish ( #19551 )
...
* webui: UI primitives and polish (non-MCP)
* chore: update webui build output
2026-02-12 12:21:00 +01:00
Aleksander Grygier
38adc7d469
WebUI Architecture Cleanup ( #19541 )
...
* webui: architecture foundation (non-MCP core refactors)
* chore: update webui build output
2026-02-12 11:22:27 +01:00
Concedo
261d78eaaa
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# CMakeLists.txt
# README.md
# docs/speculative.md
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/ggml-cann.cpp
# tests/CMakeLists.txt
# tests/test-backend-ops.cpp
# tools/mtmd/clip.cpp
2026-02-12 18:05:20 +08:00
RichardScottOZ
fa16e517a3
server : fix typo in README.md for features list ( #19510 )
...
removes an extra 'l' in 'full'
2026-02-12 08:56:25 +01:00
AesSedai
e463bbdf65
model: Add Kimi-K2.5 support ( #19170 )
...
* Move dequant_model to after the text_config merge
Add new kimi-k2.5 keys to mtmd convert
Update V_MMPROJ tensor mapping for new mm_projector.proj keys
Update V_M_IMP_NORM for new mm_projector.pre_norm key
* Fix a couple of oversights
* Add image support for Kimi-K2.5
* Revert changes to KimiVLForConditionalGeneration
* Fix an assert crash
* Fix permute accidentally swapping w / h
* Kimi-K2.5: Use merged QKV for vision
* Kimi-K2.5: pre-convert vision QK to use build_rope_2d
* Kimi-K2.5: support non-interleaved rope for vision
* Kimi-K2.5: fix min / max pixel
* Kimi-K2.5: remove v/o permutes, unnecessary
* Kimi-K2.5: update permute name to match
* Update convert_hf_to_gguf.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Kimi-K2.5: replace build_rope_2d ggml_cont with ggml_view_3d pointers
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-02-11 16:47:30 +01:00
Georgi Gerganov
6d95707827
model : fix wavtokenizer embedding notions ( #19479 )
2026-02-11 07:52:20 +02:00
JJJYmmm
fc0fe40049
models : support qwen3.5 series ( #19468 )
...
* support qwen3.5 series
* remove deepstack for now, and some code clean
* code clean
* add FULL_ATTENTION_INTERVAL metadata
* code clean
* reorder v heads for linear attention to avoid expensive interleaved repeat
2026-02-10 18:00:26 +02:00
Daniel Bevenius
66d403c480
tts : fix typos in README.md [no ci] ( #19463 )
2026-02-10 07:30:41 +01:00
Tarek Dakhran
262364e31d
mtmd: Implement tiling for LFM2-VL ( #19454 )
2026-02-09 17:30:32 +01:00
손희준
820ebfa6f4
Server: log when converting requests to chat completions format ( #19457 )
...
* Log converting requests
* Print as debug instead of info [no ci]
---------
Co-authored-by: openingnow <>
2026-02-09 16:22:57 +01:00
Sascha Rogmann
292f6908cd
spec : remove check rate ( #19377 )
...
* spec: remove parameter spec-ngram-check-rate
* spec : renamed statistics vars
* spec : add n_call_begin, n_call_accept
* spec : don't enable key-map-stats
2026-02-09 15:30:50 +02:00
Concedo
757b293ac9
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# .github/workflows/server-webui.yml
# .github/workflows/server.yml
# tools/rpc/rpc-server.cpp
2026-02-09 00:33:11 +08:00
Adrien Gallouët
5fa1c190d9
rpc : update from common.cpp ( #19400 )
...
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-02-08 09:06:45 +01:00
Georgi Gerganov
eb449cdfa4
server : improve context checkpoint logic ( #19408 )
2026-02-08 09:40:04 +02:00
ddh0
5999b50eb0
llama-quantize : cleanup --help output ( #19317 )
...
* cleanup `llama-quantize --help` output
some much-needed TLC
* remove future argument
oops, spoiler
* cleanup of cleanup
2026-02-08 09:22:38 +02:00
Concedo
a0a78dacc4
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# docs/ops.md
# docs/ops/SYCL.csv
# ggml/src/ggml-sycl/element_wise.cpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
# ggml/src/ggml-webgpu/ggml-webgpu.cpp
# pyproject.toml
# requirements/requirements-convert_legacy_llama.txt
# src/CMakeLists.txt
# src/llama-vocab.cpp
# tests/test-backend-ops.cpp
2026-02-07 15:54:02 +08:00
Georgi Gerganov
dfde5993ea
common : add common_speculative_is_compat() ( #19270 )
...
* llama : add llama_memory_can_rm_suffix()
* Revert "llama : add llama_memory_can_rm_suffix()"
This reverts commit d30e59b62a15ef4266a6503e3f4eba770aec001b.
* spec : check if the target context is compatible for spec decoding
2026-02-06 16:47:22 +02:00
Concedo
1f803ae27b
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/server.yml
# CMakeLists.txt
# cmake/common.cmake
# ggml/src/ggml-virtgpu/apir_cs_ggml-rpc-front.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched-backend.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched-buffer-type.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched-buffer.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched-device.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched.cpp
# ggml/src/ggml-virtgpu/backend/backend-dispatched.gen.h
# ggml/src/ggml-virtgpu/backend/backend-dispatched.h
# ggml/src/ggml-virtgpu/backend/backend.cpp
# ggml/src/ggml-virtgpu/backend/shared/apir_cs.h
# ggml/src/ggml-virtgpu/backend/shared/apir_cs_ggml.h
# ggml/src/ggml-virtgpu/ggml-backend-buffer-type.cpp
# ggml/src/ggml-virtgpu/ggml-backend-device.cpp
# ggml/src/ggml-virtgpu/ggml-backend-reg.cpp
# ggml/src/ggml-virtgpu/ggml-remoting.h
# ggml/src/ggml-virtgpu/ggmlremoting_functions.yaml
# ggml/src/ggml-virtgpu/regenerate_remoting.py
# ggml/src/ggml-virtgpu/virtgpu-forward-backend.cpp
# ggml/src/ggml-virtgpu/virtgpu-forward-buffer-type.cpp
# ggml/src/ggml-virtgpu/virtgpu-forward-buffer.cpp
# ggml/src/ggml-virtgpu/virtgpu-forward-device.cpp
# ggml/src/ggml-virtgpu/virtgpu-forward-impl.h
# ggml/src/ggml-virtgpu/virtgpu-forward.gen.h
# ggml/src/ggml-virtgpu/virtgpu-shm.cpp
# ggml/src/ggml-virtgpu/virtgpu.cpp
# ggml/src/ggml-virtgpu/virtgpu.h
2026-02-04 16:21:06 +08:00
Daniel Bevenius
25f40ca65f
completion : simplify batch (embd) processing ( #19286 )
...
* completion : simplify batch (embd) processing
This commit simplifies the processing of embd by removing the existing
for loop that iterates in increments of params.n_batch. It also removes
the clamping of n_eval, since the size of embd is always at most
params.n_batch.
The motivation is to clarify the code: viewed in isolation, the for loop
is confusing because it suggests that multiple batches can be processed.
* add an assert to verify n_eval is not greater than n_batch
2026-02-04 05:43:28 +01:00
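A sketch of the resulting shape of the code, assuming (as the commit states) that embd is filled with at most params.n_batch tokens at a time; the names below are illustrative, not the actual tools code:

    #include <cassert>
    #include <vector>

    static void eval_embd(const std::vector<int> & embd, int n_batch) {
        // no for loop over n_batch increments, and no clamping of n_eval:
        // embd never holds more than n_batch tokens
        const int n_eval = (int) embd.size();
        assert(n_eval <= n_batch);
        // llama_decode(ctx, llama_batch_get_one(embd.data(), n_eval)); ...
        (void) n_eval; (void) n_batch;
    }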
Concedo
7b393fa487
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# AUTHORS
# ci/run.sh
# docs/backend/SYCL.md
# docs/build.md
# docs/multimodal/minicpmo2.6.md
# docs/multimodal/minicpmo4.0.md
# docs/multimodal/minicpmv2.5.md
# docs/multimodal/minicpmv2.6.md
# docs/multimodal/minicpmv4.0.md
# docs/multimodal/minicpmv4.5.md
# docs/ops.md
# docs/ops/SYCL.csv
# docs/speculative.md
# examples/deprecation-warning/README.md
# examples/deprecation-warning/deprecation-warning.cpp
# examples/model-conversion/Makefile
# examples/model-conversion/scripts/causal/convert-model.sh
# ggml/include/ggml-cann.h
# ggml/src/ggml-cann/acl_tensor.cpp
# ggml/src/ggml-cann/acl_tensor.h
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cann/common.h
# ggml/src/ggml-cann/ggml-cann.cpp
# ggml/src/ggml-metal/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# ggml/src/ggml-opencl/kernels/concat.cl
# ggml/src/ggml-opencl/kernels/repeat.cl
# ggml/src/ggml-opencl/kernels/scale.cl
# ggml/src/ggml-opencl/kernels/tanh.cl
# ggml/src/ggml-sycl/CMakeLists.txt
# ggml/src/ggml-sycl/dpct/helper.hpp
# ggml/src/ggml-sycl/ggml-sycl.cpp
# ggml/src/ggml-sycl/outprod.cpp
# ggml/src/ggml-sycl/rope.cpp
# ggml/src/ggml-sycl/wkv.cpp
# src/llama-vocab.cpp
# tests/test-autorelease.cpp
# tests/test-backend-ops.cpp
# tools/cvector-generator/pca.hpp
# tools/export-lora/export-lora.cpp
# tools/perplexity/README.md
2026-02-03 19:00:42 +08:00
Xuan-Son Nguyen
07a7412a3b
mtmd: add min/max pixels gguf metadata ( #19273 )
2026-02-02 20:59:06 +01:00