Concedo
28091dec43
pipeline parallel default enable
2026-01-21 17:57:41 +08:00
Concedo
cdd6578a9a
esrgan added
2026-01-20 22:10:37 +08:00
Concedo
c9c15749e0
wip on adding esrgan upscaling
2026-01-20 00:35:35 +08:00
Concedo
393791496d
flux 2 taesd (+1 squashed commits)
...
Squashed commits:
[adfc3f3a2] flux 2 taesd
2026-01-19 23:47:16 +08:00
Concedo
d827494f17
fix text for vae (+1 squashed commits)
...
Squashed commits:
[793caed19] fix text
2026-01-19 01:50:07 +08:00
Concedo
70f92b12f8
sdxs clamp steps and cfg
2026-01-19 01:07:27 +08:00
Wagner Bruna
10851f223d
sd: sync to master-473-9565c7f ( #1927 )
...
* sd: sync to master-473-9565c7f
* sd: add support for flux2 klein
2026-01-19 01:04:34 +08:00
Concedo
7f618454ff
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/labeler.yml
# CODEOWNERS
# docs/backend/OPENCL.md
# docs/ops.md
# docs/ops/CANN.csv
# docs/ops/WebGPU.csv
# ggml/src/ggml-blas/CMakeLists.txt
# ggml/src/ggml-opencl/kernels/mul_mv_q6_k.cl
# ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
# ggml/src/ggml-webgpu/ggml-webgpu.cpp
# ggml/src/ggml-webgpu/wgsl-shaders/cpy.tmpl.wgsl
# ggml/src/ggml-webgpu/wgsl-shaders/set_rows.wgsl
# tests/test-backend-ops.cpp
2026-01-18 23:24:29 +08:00
Francisco Herrera
293a1565dc
docs: add linux to index ( #18907 )
2026-01-18 18:03:35 +08:00
Concedo
3ba3d15fe3
fixed a typo
2026-01-18 16:41:34 +08:00
Llama
95ebfdcde8
Add token ids to logprob data returned by the API ( #1928 )
...
Previously, logprobs only contained the token string
and byte data, as well as the log probability itself.
For workflows that require the token id, translating
from the token bytes to the token id is potentially
costly and unreliable. It is simple and inexpensive
to expose the numeric token ids directly instead.
2026-01-18 16:30:46 +08:00
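The logprobs change above can be pictured with a hypothetical response entry; the field names here are illustrative, not the exact API schema of this server:

```python
import math

# Hypothetical logprob entry after this change: the numeric token id is
# returned alongside the token string, raw bytes, and log probability,
# so clients no longer need to map token bytes back to an id themselves.
entry = {
    "id": 15043,                       # token id (the newly exposed field)
    "token": "Hello",
    "bytes": [72, 101, 108, 108, 111],
    "logprob": -0.12,
}

# The id can be consumed directly; the probability is recovered as usual:
prob = math.exp(entry["logprob"])
```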
Concedo
7b4517c2fe
embeddings memory usage regression fix
2026-01-18 16:26:52 +08:00
Xuan-Son Nguyen
fe44d35574
tests : add test-jinja -py option for cross-checking ( #18906 )
...
* tests : add test-jinja -py option for cross-checking
* Update tests/test-jinja.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* fix + add source
* SandboxedEnvironment
* fix array.map case
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-01-18 08:14:27 +01:00
Concedo
3816391a74
increase logprobs returned to 10
2026-01-18 11:13:42 +08:00
Concedo
22ddad81b9
device override set in gui
2026-01-18 10:54:20 +08:00
Sigbjørn Skjæret
bbcdac0189
jinja : fix object item order (and properly implement dictsort) ( #18904 )
...
* fix object item order
* as_ordered_object
* copy whole object
2026-01-18 03:40:06 +01:00
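For reference, Jinja's documented `dictsort` semantics (sort a dict's items, by key and case-insensitively by default) can be sketched in Python. This is the behavior the filter is expected to match, not the C++ implementation from the PR:

```python
def dictsort(d, case_sensitive=False, by="key"):
    # Jinja's dictsort filter: returns a list of (key, value) pairs sorted
    # by key (by="key") or by value (by="value"); string comparison is
    # case-insensitive unless case_sensitive=True.
    idx = 0 if by == "key" else 1

    def sort_key(kv):
        v = kv[idx]
        if not case_sensitive and isinstance(v, str):
            return v.lower()
        return v

    return sorted(d.items(), key=sort_key)
```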
Sigbjørn Skjæret
d03c45c9c5
jinja : attribute support for join, map and sort ( #18883 )
...
* support negative array index and default value
* attribute support (int and str) for join, map and sort
* add tests
* update CODEOWNERS
* improve fixme sorting comment
2026-01-18 02:53:01 +01:00
Sigbjørn Skjæret
10c98cbdf6
jinja : add missing tojson filter for bool ( #18900 )
...
* add missing tojson for bool
* add more literal tests
2026-01-18 01:05:09 +01:00
Sigbjørn Skjæret
420960ab92
jinja : fix lexing of float literals with sign ( #18901 )
...
* fix lexing of float literals with sign
* add test
* consume_numeric
2026-01-18 00:57:51 +01:00
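The fix above concerns tokenizing literals like `-1.5` as a single numeric token. A rough Python sketch of a `consume_numeric`-style helper (the name comes from the PR notes; the body is an illustrative guess, not the repo's actual lexer, and it assumes at least one digit follows the sign):

```python
def consume_numeric(src: str, pos: int):
    # Consume an optionally signed integer or float literal starting at
    # pos; return (value, new_pos). The sign must be folded into the
    # literal here, otherwise "-1.5" lexes as a minus token followed by "1.5".
    start = pos
    if pos < len(src) and src[pos] in "+-":
        pos += 1
    while pos < len(src) and src[pos].isdigit():
        pos += 1
    if pos < len(src) and src[pos] == ".":
        pos += 1
        while pos < len(src) and src[pos].isdigit():
            pos += 1
    text = src[start:pos]
    return (float(text) if "." in text else int(text)), pos
```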
Xuan-Son Nguyen
f55b033ae6
jinja: correct member access rule ( #18905 )
2026-01-18 00:48:55 +01:00
lhez
d1b4757ded
opencl: fix q6_K mv for m=1 ( #18893 )
2026-01-17 13:50:32 -08:00
Sigbjørn Skjæret
57c0beaed0
ci : add label for jinja changes ( #18903 )
2026-01-17 21:52:02 +01:00
Georgi Gerganov
2fbde785bc
kv-cache : optimize KQ mask construction ( #18842 )
...
* kv-cache : optimize KQ mask construction
* cont : add explanation + improve
* cont : fix
2026-01-17 15:42:42 +02:00
Concedo
ac3392d5da
updated lite
2026-01-17 21:40:53 +08:00
Concedo
89a205ecc7
bump version
2026-01-17 19:09:14 +08:00
Concedo
62bea5ef4f
allow overriding the devices directly
2026-01-17 19:08:06 +08:00
Concedo
21e6ccb8cb
updated lite
2026-01-17 15:59:53 +08:00
Concedo
d2b2224b0d
vulkan env var always take priority
2026-01-17 10:34:45 +08:00
Concedo
8855a7f52b
Merge commit 'c945aaaef2' into concedo_experimental
...
# Conflicts:
# .devops/cann.Dockerfile
# .github/workflows/build.yml
# .github/workflows/release.yml
# README.md
# common/CMakeLists.txt
# common/chat.cpp
# docs/function-calling.md
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-cann/common.h
# ggml/src/ggml-cann/ggml-cann.cpp
# models/templates/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16.jinja
# scripts/sync_vendor.py
# tests/CMakeLists.txt
# tests/peg-parser/tests.h
# tests/test-chat-peg-parser.cpp
# tests/test-chat-template.cpp
# tests/test-chat.cpp
# tests/testing.h
# tools/llama-bench/llama-bench.cpp
2026-01-17 10:24:03 +08:00
Reese Levine
a89002f07b
ggml webgpu: support for backend sampling ( #18880 )
...
* ggml webgpu: add SOFTPLUS unary operator
Implements SOFTPLUS (log(1 + exp(x))) with f16/f32 support. Uses f32
precision for intermediate calculations to prevent f16 overflow.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* Follow Vulkan backend numerical stability pattern
* ggml webgpu: add EXPM1 unary operator
Implements EXPM1 (exp(x) - 1) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add FLOOR unary operator
Implements FLOOR (rounds down to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add CEIL unary operator
Implements CEIL (rounds up to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add ROUND unary operator
Implements ROUND (rounds to nearest integer) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* ggml webgpu: add TRUNC unary operator
Implements TRUNC (truncates towards zero) with f16/f32 support.
* Add shader implementation and 4 variants (f32/f16, inplace/non-inplace)
* Register pipelines and device support
* docs : update WebGPU support for unary operators (FLOOR, CEIL, ROUND, TRUNC, EXPM1, SOFTPLUS)
* Updates to webgpu get_memory
* Add argmax
* Add argmax,cumsum,sum,sum_rows
* Add necessary CPY/GET_ROWS operators
* Support for argsort using multi-pass strategy
* Update set_rows for i32 indices, move to pre-wgsl
* Port unary operators to pre-wgsl and support FILL
* Implement PAD
* Add support for top-k
* clean up, scope pipeline init mutex
* fix newline
* Add support for log
* Update LOG for better precision, and ops doc
---------
Co-authored-by: Abhijit Ramesh <abhijitramesh2k@gmail.com>
2026-01-16 16:12:43 -08:00
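The numerical-stability note above (f32 intermediates so SOFTPLUS does not overflow in f16) can be illustrated outside the shader. The actual kernels are WGSL; this `softplus` helper is a minimal Python sketch of the overflow-avoiding formulation, not repo code:

```python
import math

def softplus(x: float) -> float:
    # softplus(x) = log(1 + exp(x)); the naive form overflows exp() for
    # large x (in f16, anything above ~11 already exceeds the max of 65504).
    # Branching on the sign and using log1p keeps every exp() argument
    # non-positive, mirroring the widen-before-exponentiating idea above.
    if x > 0:
        return x + math.log1p(math.exp(-x))  # exp(-x) <= 1, never overflows
    return math.log1p(math.exp(x))           # exp(x) <= 1 here
```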
Concedo
d15bd212c5
cleanup
2026-01-17 00:57:33 +08:00
Concedo
0d43bdc46d
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# examples/batched/batched.cpp
# ggml/src/ggml-opencl/CMakeLists.txt
# ggml/src/ggml-opencl/ggml-opencl.cpp
# src/llama-context.cpp
# tools/cli/README.md
# tools/completion/README.md
# tools/server/README.md
2026-01-17 00:41:28 +08:00
Concedo
a5204d2363
fixed mcp command location
2026-01-17 00:09:46 +08:00
Thore Koritzius
388ce82241
ggml : extend ggml_pool_1d + metal ( #16429 )
...
* chore: resolve conflicts
* feat: ggml metal impl
* fix: ggml_metal_kargs_pool_1d struct
* fix: require contiguous input
* chore: test pool_1d
* chore: limit pool1d test cases to p0=0 and s0=k0 to conform with asserts
* chore: add p0 and s0 to testing
* fix: allow padding for cpu and metal
* Update ggml/src/ggml-metal/ggml-metal.metal
* fix: correct single-threaded loop
* ggml : cleanup
* tests : add ne[1] != 1 tests
* fix: ne[1] handling in np
* cont : fixes
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-16 16:59:56 +02:00
Concedo
22af5f1250
Merge commit '2a13180100' into concedo_experimental
...
# Conflicts:
# .devops/cann.Dockerfile
# .devops/cpu.Dockerfile
# .devops/cuda-new.Dockerfile
# .devops/cuda.Dockerfile
# .devops/intel.Dockerfile
# .devops/llama-cli-cann.Dockerfile
# .devops/musa.Dockerfile
# .devops/nix/package.nix
# .devops/rocm.Dockerfile
# .devops/s390x.Dockerfile
# .devops/vulkan.Dockerfile
# .github/workflows/build-cmake-pkg.yml
# .github/workflows/build-linux-cross.yml
# .github/workflows/build.yml
# .github/workflows/copilot-setup-steps.yml
# .github/workflows/release.yml
# .github/workflows/server-webui.yml
# .github/workflows/server.yml
# CMakeLists.txt
# README.md
# build-xcframework.sh
# ci/run.sh
# cmake/common.cmake
# common/CMakeLists.txt
# docs/backend/hexagon/CMakeUserPresets.json
# docs/backend/hexagon/README.md
# docs/build-riscv64-spacemit.md
# docs/build.md
# examples/debug/debug.cpp
# examples/eval-callback/CMakeLists.txt
# examples/eval-callback/eval-callback.cpp
# examples/llama.android/lib/build.gradle.kts
# examples/sycl/build.sh
# examples/sycl/win-build-sycl.bat
# ggml/src/ggml-hexagon/ggml-hexagon.cpp
# ggml/src/ggml-hexagon/htp/CMakeLists.txt
# ggml/src/ggml-hexagon/htp/act-ops.c
# ggml/src/ggml-hexagon/htp/binary-ops.c
# ggml/src/ggml-hexagon/htp/flash-attn-ops.c
# ggml/src/ggml-hexagon/htp/get-rows-ops.c
# ggml/src/ggml-hexagon/htp/hex-dma.c
# ggml/src/ggml-hexagon/htp/hex-dma.h
# ggml/src/ggml-hexagon/htp/htp-ctx.h
# ggml/src/ggml-hexagon/htp/htp-msg.h
# ggml/src/ggml-hexagon/htp/htp-ops.h
# ggml/src/ggml-hexagon/htp/hvx-utils.h
# ggml/src/ggml-hexagon/htp/main.c
# ggml/src/ggml-hexagon/htp/matmul-ops.c
# ggml/src/ggml-hexagon/htp/rope-ops.c
# ggml/src/ggml-hexagon/htp/set-rows-ops.c
# ggml/src/ggml-hexagon/htp/softmax-ops.c
# ggml/src/ggml-hexagon/htp/unary-ops.c
# ggml/src/ggml-hexagon/htp/worker-pool.c
# scripts/debug-test.sh
# scripts/serve-static.js
# scripts/snapdragon/adb/run-bench.sh
# scripts/snapdragon/adb/run-cli.sh
# scripts/snapdragon/adb/run-mtmd.sh
# scripts/snapdragon/adb/run-tool.sh
# scripts/tool_bench.py
# tests/CMakeLists.txt
# tests/test-backend-ops.cpp
# tools/mtmd/clip.cpp
2026-01-16 21:52:01 +08:00
hipudding
6ba6a3c76f
docs : update ops.md for CANN backend ( #18654 )
2026-01-16 13:32:17 +01:00
Perry Naseck
0802d4cfb3
ggml-blas: hide warnings from included BLAS headers ( #18818 )
...
* fix compile def openblas, blis for compat libs, nvpl compile def, warn if no blas vendor set
* ggml-blas: hide warnings from included BLAS headers
2026-01-16 13:38:25 +02:00
Concedo
a53da2f8bd
updated sdui from riztard
2026-01-16 18:31:30 +08:00
Tarek Dakhran
c945aaaef2
mtmd : Fix ASR for LFM2.5-Audio-1.5B ( #18876 )
2026-01-16 11:23:08 +01:00
Xuan-Son Nguyen
c15395f73c
common : implement new jinja template engine ( #18462 )
...
* jinja vm
* lexer
* add vm types
* demo
* clean up
* parser ok
* binary_expression::execute
* shadow naming
* bin ops works!
* fix map object
* add string builtins
* add more builtins
* wip
* use mk_val
* eval with is_user_input
* render gemma tmpl ok
* track input string even after transformations
* support bound functions
* keyword arguments and slicing array
* use shared_ptr for values
* add mk_stmt
* allow print source on exception
* fix negate test
* testing more templates
* mostly works
* add filter_statement
* allow func to access ctx
* add jinja-value.cpp
* impl global_from_json
* a lot of fixes
* more tests
* more fix, more tests
* more fixes
* rm workarounds
* demo: type inference
* add placeholder for tojson
* improve function args handling
* rm type inference
* no more std::regex
* trailing spaces
* make testing more flexible
* make output a bit cleaner
* (wip) redirect minja calls
* test: add --output
* fix crash on macro kwargs
* add minimal caps system
* add some workarounds
* rm caps_apply_workarounds
* get rid of preprocessing
* more fixes
* fix test-chat-template
* move test-chat-jinja into test-chat-template
* rm test-chat-jinja from cmake
* test-chat-template: use common
* fix build
* fix build (2)
* rename vm --> interpreter
* improve error reporting
* correct lstrip behavior
* add tojson
* more fixes
* disable tests for COMMON_CHAT_FORMAT_GENERIC
* make sure tojson output correct order
* add object.length
* fully functional selectattr / rejectattr
* improve error reporting
* more builtins added, more fixes
* create jinja rendering tests
* fix testing.h path
* adjust whitespace rules
* more fixes
* temporary disable test for ibm-granite
* r/lstrip behavior matched with hf.js
* minimax, glm4.5 ok
* add append and pop
* kimi-k2 ok
* test-chat passed
* fix lstrip_block
* add more jinja tests
* cast to unsigned char
* allow dict key to be numeric
* nemotron: rm windows newline
* tests ok
* fix test
* rename interpreter --> runtime
* fix build
* add more checks
* bring back generic format support
* fix Apertus
* [json.exception.out_of_range.403] key 'content' not found
* rm generic test
* refactor input marking
* add docs
* fix windows build
* clarify error message
* improved tests
* split/rsplit with maxsplit
* non-inverse maxsplit
forgot to change after simplifying
* implement separators for tojson and fix indent
* i like to move it move it
* rename null -> none
* token::eof
* some nits + comments
* add exception classes for lexer and parser
* null -> none
* rename global -> env
* rm minja
* update docs
* docs: add input marking caveats
* implement missing jinja-tests functions
* oops
* support trim filter with args, remove bogus to_json reference
* numerous argument fixes
* updated tests
* implement optional strip chars parameter
* use new chars parameter
* float filter also has default
* always leave at least one decimal in float string
* jinja : static analysis + header cleanup + minor fixes
* add fuzz test
* add string.cpp
* fix chat_template_kwargs
* nits
* fix build
* revert
* unrevert
sorry :)
* add fuzz func_args, refactor to be safer
* fix array.map()
* loosen ensure_vals max count condition, add not impl for map(int)
* hopefully fix windows
* check if empty first
* normalize newlines
---------
Co-authored-by: Alde Rojas <hello@alde.dev>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-16 11:22:06 +01:00
Concedo
a2c5b81b54
updated lite
2026-01-16 18:09:16 +08:00
Concedo
c332bb614c
better mcp error messages
2026-01-16 17:55:34 +08:00
Concedo
af7811dbe1
Merge commit '3e4bb29666' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# ci/run.sh
# cmake/common.cmake
# examples/eval-callback/CMakeLists.txt
# examples/model-conversion/scripts/causal/modelcard.template
# ggml/src/ggml-cuda/fattn.cu
# ggml/src/ggml-metal/CMakeLists.txt
# src/CMakeLists.txt
# tests/CMakeLists.txt
# tests/test-arg-parser.cpp
2026-01-16 17:55:22 +08:00
Julius Tischbein
aa1dc3770a
Setting mmap and direct_io to false as default in llama-bench.cpp ( #18841 )
2026-01-16 09:46:51 +01:00
Raul Torres
4ea2eaac01
CANN: Remove unused ggml_cann_get_device function ( #18625 )
2026-01-16 16:34:09 +08:00
Chenguang Li
e20fa27a02
CANN: fix an issue where get_env was not fully renamed ( #18796 )
...
* CANN: fix an issue where get_env was not fully renamed
* ci: add cann with acl group
* ci: define use_acl_graph using GitHub Action
* ci: update cann dockerfile with acl graph
2026-01-16 16:24:04 +08:00
hipudding
baa4ba0aec
CANN: support gated linear attn ( #18653 )
...
* CANN: support gated linear attn
This change adds support for the GGML_OP_GATED_LINEAR_ATTN operator.
The feature was implemented by YushengZhao; because the previous
submission was based on an outdated codebase, this PR was rebased
before merging.
Co-authored-by: YushengZhao <yusheng.chao@outlook.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
* CANN: optimize OP gla
Optimize gla for high performance
* Remove unused comments
---------
Co-authored-by: 赵禹昇 <2501112001@cninfer02.localdomain>
Co-authored-by: YushengZhao <yusheng.chao@outlook.com>
2026-01-16 16:18:49 +08:00
shaofeiqi
785a710085
OpenCL: add SOLVE_TRI op support ( #18846 )
2026-01-15 11:17:17 -08:00
Georgi Gerganov
6e7fc8a146
cuda : print less debug logs when disabling cuda graphs ( #18868 )
2026-01-15 20:53:01 +02:00
Georgi Gerganov
be8e3d9515
context : do not reserve scheduler for warmups ( #18867 )
2026-01-15 19:35:57 +02:00