Olivier Chafik
b6930ebc42
tool-call : fix non-tool-calling grammar crashes w/ Qwen / Hermes 2 templates ( #12900 )
...
* `tool-call`: don't call common_chat_params_init_hermes_2_pro when there aren't tools (or when there's a schema)
* test all chat formats w/o tools
2025-04-11 21:47:52 +02:00
yuri@FreeBSD
68b08f36d0
common : Define cache directory on FreeBSD ( #12892 )
2025-04-11 21:45:44 +02:00
Concedo
a56cc72bd0
added handling for remembering file paths; added a GUI option to disable zenity
2025-04-12 00:42:26 +08:00
henk717
f6b7fea979
zentk - folder select workaround ( #1478 )
...
* zentk - folder select workaround
* kcppt extension fix
2025-04-11 22:37:07 +08:00
Ewan Crawford
578754b315
sycl: Support sycl_ext_oneapi_limited_graph ( #12873 )
...
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support graph update (see the sketch below).
2025-04-11 15:32:14 +02:00
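A minimal C++ sketch of the widened aspect check described above, assuming a DPC++ compiler that implements the `sycl_ext_oneapi_graph` extension (the aspect names come from that extension's spec; the function name is illustrative):

```cpp
#include <sycl/sycl.hpp>

// Accept devices exposing either the full graph aspect or the limited one,
// which supports record/replay but not updating a finalized graph.
static bool device_supports_sycl_graph(const sycl::device & dev) {
    if (dev.has(sycl::aspect::ext_oneapi_graph)) {
        return true; // full extension, including graph update
    }
    return dev.has(sycl::aspect::ext_oneapi_limited_graph);
}
```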
tastelikefeet
b2034c2b55
contrib: support modelscope community ( #12664 )
...
* support download from modelscope
* support login
* remove comments
* add arguments
* fix code
* fix win32
* test passed
* fix readme
* revert readme
* change to MODEL_ENDPOINT
* revert tail line
* fix readme
* refactor model endpoint
* remove blank line
* fix header
* fix as comments
* update comment
* update readme
---------
Co-authored-by: tastelikefeet <yuze.zyz@alibaba-inc.com>
2025-04-11 14:01:56 +02:00
henk717
8fd70f37bd
Zentk integration (Zenity/yad support) ( #1475 )
...
* Zentk integration (Zenity/yad support)
* Escape incompatible dependencies in zentk
* Properly clean env
2025-04-11 18:23:23 +08:00
Yuxuan Zhang
06bb53ad9b
llama-model : add Glm4Model implementation for GLM-4-0414 ( #12867 )
...
* GLM-4-0414
* use original one
* Using with tensor map
* fix bug
* change order
* change order
* format with flake8
2025-04-11 12:10:10 +02:00
Xuan-Son Nguyen
0c50923944
clip : use smart pointer ( ⚠️ breaking change) ( #12869 )
...
* clip : use smart pointers
* fix warmup
* add forward declaration
* missing include
* fix include (2)
* composite
* simplify batch ptr
* fix conflict
2025-04-11 12:09:39 +02:00
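A hedged sketch of the general pattern such a refactor uses: wrapping an opaque C handle in `std::unique_ptr` with a custom deleter. The `clip_free()`-style C API and the alias name are assumptions for illustration, not the actual header contents:

```cpp
#include <memory>

struct clip_ctx;                        // opaque C handle (assumed)
extern "C" void clip_free(clip_ctx *);  // assumed C-style destructor

struct clip_ctx_deleter {
    void operator()(clip_ctx * ctx) const { clip_free(ctx); }
};

// Owning alias: destruction is now automatic and leak-free, but callers that
// previously held a raw clip_ctx* must adapt -- hence the breaking change.
using clip_ctx_ptr = std::unique_ptr<clip_ctx, clip_ctx_deleter>;
```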
Akarshan Biswas
fccf9cae83
SYCL: Add fp16 type support to unary op kernels ( #12788 )
...
* SYCL: Add fp16 support to some elementwise OP kernels
* remove comment
ggml-ci
* Use static_cast directly
* remove not needed cast from tanh
* Use static cast and remove unneeded castings
* Adjust device_support_op for unary OPs
* Use cast_data and typed_data struct to deduplicate casting code
2025-04-11 16:03:50 +08:00
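The bullet about `cast_data` and a `typed_data` struct suggests a pattern like the following hedged sketch (names and layout are assumptions, and the plain CPU loop stands in for the SYCL kernel): compute in fp32 and `static_cast` at the edges so one template serves both fp32 and fp16 buffers.

```cpp
#include <cstddef>

// Illustrative stand-in for the commit's typed_data/cast_data helpers.
template <typename T>
struct typed_data {
    const T * src;
    T       * dst;
};

template <typename T>
void unary_abs(typed_data<T> data, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        // casting at the boundaries keeps one kernel body for f32 and f16
        const float x = static_cast<float>(data.src[i]);
        data.dst[i]   = static_cast<T>(x < 0 ? -x : x);
    }
}
```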
Daniel Han
ec6c09d0fa
convert : Llama4 RoPE fix ( #12889 )
2025-04-11 09:49:09 +02:00
R0CKSTAR
8ac9f5d765
ci : Replace freediskspace with free_disk_space in docker.yml ( #12861 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-11 09:26:17 +02:00
Daniel Bevenius
12e9158f25
xcf : add check for visionos build version ( #12854 )
...
This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.
This commit also uses xcrun for the vtool so that the version of vtool
in xcode command line tools is used instead of the one in the system
path.
Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223
2025-04-11 09:24:34 +02:00
Xuan-Son Nguyen
5b1f13cb64
convert : proper tensor name mapping for llama4 ( #12870 )
...
* Llama-4 mapping
* remove hacky renaming
---------
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
2025-04-11 09:23:37 +02:00
Xuan-Son Nguyen
8b91d5355a
llama : correct rms norm for llama 4 ( #12882 )
2025-04-11 08:49:50 +02:00
Aaron Teo
0fed24c347
ggml: fix compilation error on s390x ( #12848 )
...
* ggml: fixes #12846 compilation error
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: add documentation for code change
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: refactor to type-cast and update documentation
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: update documentation to provide full issue link
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
---------
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
2025-04-11 08:20:07 +03:00
Georgi Gerganov
47ba87d0a4
sync : ggml
2025-04-11 00:17:47 +03:00
Georgi Gerganov
1d2b613445
tests : fix init order ( #0 )
...
ggml-ci
2025-04-11 00:17:47 +03:00
Georgi Gerganov
eb420e1148
sync : ggml
...
ggml-ci
2025-04-11 00:17:47 +03:00
cmdr2
cb79c2e7fa
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
...
fix #1186
2025-04-11 00:17:47 +03:00
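A hedged sketch of the kind of include guard the fix above implies (the exact condition in ggml may differ): skip `arm_neon.h` when CUDA 12's nvcc is compiling the translation unit.

```cpp
// Only pull in NEON intrinsics when the compiler can handle the header;
// nvcc from CUDA 12 on ARM cannot.
#if defined(__ARM_NEON) && !(defined(__CUDACC__) && __CUDACC_VER_MAJOR__ >= 12)
#include <arm_neon.h>
#endif
```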
Diego Devesa
fe92821ea9
ggml : add bilinear upscale support (ggml/1185)
2025-04-11 00:17:47 +03:00
Diego Devesa
459895c326
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
...
* ggml : add more generic ggml_custom op
* ggml : remove deprecated custom ops
2025-04-11 00:17:47 +03:00
Georgi Gerganov
e4bf72d631
scripts : fix sync-ggml-am.sh
2025-04-11 00:17:47 +03:00
Xuan-Son Nguyen
8b9cc7cdd8
llava : introduce libmtmd ( #12849 )
...
* wip llava2
* migrated gemma3 to llava2
* add timings
* correct pre/postfix
* fix missing include
* fix compilation unused var warn
* update llava2_tokenize
* change name llava2 --> mtmd
* improve api
* refine helpers
* Update examples/llava/mtmd.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-04-10 22:57:16 +02:00
Xuan-Son Nguyen
64eda5deb9
convert : ability to lazy-load safetensors remotely without downloading to disk ( #12820 )
...
* gguf util : add SafetensorRemote
* fix style
* convert: add --remote option
* convert : allow using lazy remote tensors
It's a bit slow for now since everything is blocking and single-threaded.
* correct metadata.name
* small style fix
* support HF_TOKEN
* convert : use writeable buffer for remote lazy tensors
* convert : fix flake8 lint regarding lambda assignment
* multithreaded download
* multithread: print debug
* fix style
* Revert "multithreaded download"
This reverts commit 42fc895ace385edc972ad819c76c704aeea61791.
* bring back _get_request_headers
---------
Co-authored-by: Francis Couture-Harpin <git@compilade.net>
2025-04-10 17:24:44 +02:00
askmyteapot
e2fefc373f
Update CMakeLists.txt - Fix source for ggml-cpu ( #1474 )
...
* Update CMakeLists.txt - Fix source for ggml-cpu
* Fixes std::min
Adding the NOMINMAX compile define seems to fix the remaining compile issues (see the sketch below).
2025-04-10 16:58:12 +08:00
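For context on the `std::min` fix: `<windows.h>` defines `min`/`max` macros that clobber the STL templates, and defining `NOMINMAX` first suppresses them. A minimal illustration, not the project's actual build flags:

```cpp
#define NOMINMAX        // must precede windows.h (or be passed as a compile define)
#ifdef _WIN32
#include <windows.h>
#endif
#include <algorithm>

int smaller(int a, int b) {
    return std::min(a, b);  // without NOMINMAX the min macro breaks this call
}
```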
Concedo
8acec907bb
revert stbi image write
2025-04-10 10:43:24 +08:00
Chenguang Li
fe5b78c896
CANN: Support more ops ( #12841 )
...
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D
* [CANN]Support COUNT_EQUAL && STEP && SGN
* [CANN]codestyle adjustment
* [CANN]codestyle adjustment
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-10 08:51:52 +08:00
Prajwal B Mehendarkar
11d07e1e69
Fixes #12823 ( #12830 )
...
* Including limits file on AIX
* Fixes #12823
2025-04-10 01:18:01 +02:00
Rudi Servo
b0091ecc1e
docker : added all CPU to GPU images ( #12749 )
2025-04-10 01:17:12 +02:00
Piotr Kubaj
31f7803bc4
ggml-cpu-impl.h: do not redefine bool on POWER9 ( #12856 )
...
error: unknown type name '_Bool'
2025-04-10 01:00:34 +02:00
Piotr Kubaj
2391506ace
ggml-impl.h: fix build on POWER9 ( #12855 )
...
error: ISO C++17 does not allow 'register' storage class specifier
2025-04-10 01:00:25 +02:00
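The two POWER9 entries above fix distinct front-end errors; a hedged sketch of each (illustrative, not the actual patches):

```cpp
// (1) "unknown type name '_Bool'": a C-era remap of bool around <altivec.h>
//     must not leak into C++ translation units, where _Bool does not exist.
#if defined(__POWER9_VECTOR__)
#include <altivec.h>
#undef bool
#ifndef __cplusplus
#define bool _Bool
#endif
#endif

// (2) "ISO C++17 does not allow 'register' storage class specifier":
//     the obsolete keyword simply has to go.
int sum(const int * v, int n) {
    /* register */ int acc = 0;  // drop `register`; compilers ignored it anyway
    for (int i = 0; i < n; ++i) acc += v[i];
    return acc;
}
```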
Concedo
27f575dc83
inpainting support completed, invert mask added
2025-04-09 23:50:17 +08:00
Concedo
23339ace9b
inpainting works in kcpp!
2025-04-09 23:01:05 +08:00
Concedo
fea3b2bd4a
updated sdcpp to prepare for inpaint
...
fixed img2img (+1 squashed commit)
Squashed commits:
[42c48f14] try update sdcpp, feels kind of buggy
2025-04-09 20:26:10 +08:00
Bo Zheng
d3bd7193ba
llama : Support Qwen3 and Qwen3MoE ( #12828 )
...
* add qwen3 & qwen3moe support.
* fix
---------
Co-authored-by: bozheng-hit <dsoul0621@gmail.com>
2025-04-09 11:47:36 +02:00
R0CKSTAR
d9a63b2f2e
musa: enable freediskspace for docker image build ( #12839 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-09 11:22:30 +02:00
Romain Biessy
8ed71242f4
sycl: update documentation to use -no-cnv ( #12845 )
2025-04-09 11:22:04 +02:00
Plamen Minev
381603a775
ci: detach common from the library ( #12827 )
...
* fix: detach common from the library
* fix: building chat test template
2025-04-09 10:11:11 +02:00
Xuan-Son Nguyen
65a69e6e1b
clip : do not print ftype ( #12832 )
2025-04-09 10:09:53 +02:00
Georgi Gerganov
47277d6d1d
readme : add rpc backend ( #12842 )
2025-04-09 10:54:42 +03:00
Chenguang Li
6e1c4cebdb
CANN: Support Opt CONV_TRANSPOSE_1D and ELU ( #12786 )
...
* [CANN] Support ELU and CONV_TRANSPOSE_1D
* [CANN]Modification review comments
* [CANN]Modification review comments
* [CANN]name adjustment
* [CANN]remove lambda used in template
* [CANN]Use std::function instead of template
* [CANN]Modify the code according to the review comments
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-09 14:04:14 +08:00
Jeff Bolz
0090950f67
vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory ( #12833 )
...
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.
This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.
The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
2025-04-09 07:25:08 +02:00
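A hedged CPU-side analogue of the restructuring described above (the real change is GLSL in mul_mm_cm2.comp; all names here are illustrative): decode each block's scale data once per outer iteration, then reuse it across the unrolled inner loop instead of re-loading and re-decoding it every iteration.

```cpp
#include <cstdint>
#include <vector>

static float decode_scale(uint8_t raw) {      // stand-in for q4_k/q5_k decoding
    return raw * (1.0f / 255.0f);
}

float dot_with_staged_scales(const std::vector<uint8_t> & scales,
                             const std::vector<float>   & x,
                             int block) {
    float acc = 0.0f;
    for (size_t b = 0; b < scales.size(); ++b) {
        // decode once per whole block ("copy into shared memory" in the shader)
        const float s = decode_scale(scales[b]);
        for (int i = 0; i < block; ++i) {      // unrolled inner loop in the shader
            acc += s * x[b * block + i];
        }
    }
    return acc;
}
```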
Jeff Bolz
7ecd780b1a
vulkan: Use fp16 for the flash attention P*V multiplication ( #12783 )
...
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
2025-04-09 07:12:57 +02:00
Sigbjørn Skjæret
7538246e7c
cuda : add f32 to bf16 copy op ( #12806 )
...
This allows BF16 KV-cache on CUDA.
2025-04-08 23:21:31 +02:00
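For reference, a round-to-nearest-even f32 to bf16 conversion of the kind such a copy op applies per element (a host-side sketch, not the actual CUDA kernel):

```cpp
#include <cstdint>
#include <cstring>

static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);      // type-pun safely
    bits += 0x7FFF + ((bits >> 16) & 1);      // round to nearest, ties to even
    return static_cast<uint16_t>(bits >> 16); // keep the top 16 bits
    // note: production code would special-case NaN before rounding
}
```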
Matt Clayton
b32efad2bc
llava: improve clip_ctx destructor to avoid leaking load_image_size ( #12834 )
2025-04-08 22:01:58 +02:00
Georgi Gerganov
a19b5cef16
llama : fix FA when KV cache is not used (i.e. embeddings) ( #12825 )
...
* ggml : FA supports F32 V
* graph : cast KV to F16 when the KV cache is not used
ggml-ci
* server : add test that exercises embeddings with FA enabled
ggml-ci
2025-04-08 19:54:51 +03:00
Xuan-Son Nguyen
78a1ba0a4f
server : fix thread.join() on exit ( #12831 )
2025-04-08 18:37:06 +02:00
dm4
2dabf759e7
llava: add more helper functions to check projector types in clip context ( #12824 )
...
Signed-off-by: dm4 <sunrisedm4@gmail.com>
2025-04-08 15:49:13 +02:00
Concedo
ebf924c5d1
Merge branch 'upstream' into concedo_experimental
2025-04-08 21:46:30 +08:00