Concedo
f06007441b
troubleshooting
2025-04-04 22:11:49 +08:00
Concedo
fe11073ce6
cap auto threads at 32 due to diminishing returns
2025-04-04 22:01:55 +08:00
Concedo
0c7f8a1d43
troubleshooting
2025-04-04 19:08:51 +08:00
Concedo
3105eeec93
added queuing for sdui
2025-04-04 18:42:32 +08:00
Concedo
57e12b73af
try containerized ci (+1 squashed commits)
...
Squashed commits:
[fc53c200] try containerized ci (+1 squashed commits)
Squashed commits:
[4b48b0d5] try containerized ci
2025-04-04 17:19:27 +08:00
Concedo
4e740311fe
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# ci/run.sh
# docs/backend/SYCL.md
# docs/build.md
# ggml/src/ggml-vulkan/CMakeLists.txt
# ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
# tests/test-chat-template.cpp
2025-04-04 15:07:47 +08:00
Concedo
c48a4a73d4
try fix file open
2025-04-04 14:38:17 +08:00
Concedo
43e9b049d6
another silly bug silly silly silly (tavern)
2025-04-04 14:16:42 +08:00
Jeff Bolz
74d4f5b041
vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency ( #12630 )
...
There seems to be a bubble waking up from waitForFences, which costs a few
percent of performance and also increases variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
2025-04-04 07:54:35 +02:00
Jeff Bolz
35e592eb30
vulkan: set cmake minimum and project name in vulkan-shaders ( #12744 )
2025-04-04 07:53:20 +02:00
lhez
7d7b1bafa7
opencl: update doc for OpenCL ( #12702 )
...
* opencl: add OpenCL to build.md
* opencl: remove fixed issue/TODO
* opencl: add link to OPENCL.md
* opencl: update doc - refine tools requirement for Windows 11 arm64
2025-04-03 22:18:17 -07:00
Gaurav Garg
c262beddf2
CUDA: Prefer vector flash decoding kernel for Gemma models ( #12738 )
...
* Prefer vector flash decoding kernel for Gemma models
Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in gen-phase throughput for Gemma models.
* Update ggml/src/ggml-cuda/fattn.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-04-03 18:20:29 +02:00
yumeyao
5dd5d1ab00
vocab : use string_view::find() to avoid unnecessary looking up beyond the fragment range ( #12706 )
2025-04-03 18:32:54 +03:00
Jeff Bolz
1c059995e0
vulkan: Fix missing cmake logic for dot product extension ( #12721 )
2025-04-03 10:08:26 -05:00
Concedo
59b7796b96
binops does not need clblast anymore
2025-04-03 23:06:19 +08:00
Concedo
47768b2780
update lite
2025-04-03 21:05:39 +08:00
Concedo
8c74520586
added NO_VULKAN_EXTENSIONS flag to disable dp4a and coopmat if needed
2025-04-03 20:51:17 +08:00
Concedo
07a96d63fa
try to ensure correct file extension
2025-04-03 20:13:53 +08:00
Atharva Dubey
2004644b7a
ci : add env variable in ggml-ci and document the same in SYCL.md ( #12736 )
2025-04-03 15:12:39 +03:00
R0CKSTAR
5f696e88e0
sync : minja (inclusionAI/Ling) and update tests ( #12699 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-03 13:51:35 +02:00
Concedo
6e086bd309
fixed savedatafile bug, try remove unneeded old clblast code path
2025-04-03 19:11:27 +08:00
Concedo
103d60ed2c
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# common/common.cpp
# examples/batched-bench/batched-bench.cpp
# examples/batched/batched.cpp
# examples/export-lora/export-lora.cpp
# examples/gritlm/gritlm.cpp
# examples/parallel/parallel.cpp
# examples/passkey/passkey.cpp
# examples/speculative-simple/speculative-simple.cpp
# examples/speculative/speculative.cpp
# ggml/src/ggml-cann/CMakeLists.txt
# ggml/src/ggml-cann/acl_tensor.cpp
# ggml/src/ggml-cann/acl_tensor.h
# ggml/src/ggml-cann/aclnn_ops.cpp
# ggml/src/ggml-cann/aclnn_ops.h
# ggml/src/ggml-vulkan/CMakeLists.txt
# tests/test-arg-parser.cpp
# tests/test-backend-ops.cpp
2025-04-03 18:57:49 +08:00
a3sh
193c3e03a6
fix MUSA compiler warning ( #12704 )
...
* fix MUSA compiler warning
* replace (void) with GGML_UNUSED
2025-04-03 09:32:55 +02:00
Chenguang Li
65cfe136a0
CANN: Support operator SIN COS ARGMAX ( #12709 )
...
* [CANN]support sin cos argmax
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]codestyle adjustment
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]Remove redundant code
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2025-04-03 15:18:08 +08:00
Alan Gray
3f9da22c2b
Simplify and improve CUDA graphs through use of indirect copy pointers ( #9017 )
...
* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers
Previously there was complexity in the CUDA graphs implementation due to
frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies by using indirection to avoid
such parameters frequently changing, avoiding the need for frequent
graph updates.
Fixes #12152
* Addressed comments
* fix HIP builds
* properly sync to stream
* removed ggml_cuda_cpy_fn_ptrs
* move stream sync before free
* guard to only use indirection with graphs
* style fixes
* check for errors
---------
Co-authored-by: slaren <slarengh@gmail.com>
2025-04-03 03:31:15 +02:00
hipudding
2a0dc97e56
CANN: Fix failed test cases ( #12708 )
...
* CANN: Fix memory waste in aclnn_tensor
* CANN: fix backend ops fail
* CANN: fix acl_tensor memory alloc.
* CANN: format
* CANN: remove trailing whitespace
2025-04-03 08:49:51 +08:00
lhez
97a20c012b
opencl: use max_alloc_size in backend ctx instead of querying again ( #12705 )
2025-04-02 17:01:42 -07:00
Jeff Bolz
f01bd02376
vulkan: Implement split_k for coopmat2 flash attention. ( #12627 )
...
When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
2025-04-02 14:25:08 -05:00
bandoti
6f3bd38640
cmake: remove caching from vulkan coopmat checks ( #12719 )
2025-04-02 14:56:26 -03:00
Jeff Bolz
be0a0f8cae
vulkan: Implement grouped query attention in the coopmat2 FA shader ( #12559 )
...
When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:
dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))
previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.
This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
2025-04-02 19:40:32 +02:00
0cc4m
92e3006bb6
Vulkan: Fix mmq int dot float cache size ( #12722 )
2025-04-02 19:12:30 +02:00
Georgi Gerganov
833e2b7409
model : print tensor size during load ( #12711 )
...
* model : print tensor size during load
* cont : fix units MB -> MiB
Co-authored-by: Diego Devesa <slarengh@gmail.com>
---------
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-04-02 16:38:54 +03:00
Diego Devesa
e0e912f49b
llama : add option to override model tensor buffers ( #11397 )
...
* llama : add option to override tensor buffers
* ggml : fix possible underflow in ggml_nbytes
2025-04-02 14:52:01 +02:00
Georgi Gerganov
a10b36c91a
llama : refactor kv cache guard ( #12695 )
...
* llama : refactor kv cache guard
ggml-ci
* cont : fix comment [no ci]
* llama : fix kv_cache restore logic
ggml-ci
* context : simplify kv cache updates
ggml-ci
* cont : better name [no ci]
* llama : fix llama_decode return code when could not find KV slot
ggml-ci
* context : change log err -> warn [no ci]
* kv-cache : add comment + warning
2025-04-02 14:32:59 +03:00
Concedo
7f1003be44
warning for max tokens being too high
2025-04-02 18:58:38 +08:00
Sigbjørn Skjæret
83a88bd6af
vocab : BailingMoE : change possessive quantifiers to greedy ( #12677 )
2025-04-02 11:21:48 +02:00
Xuan-Son Nguyen
42eb248f46
common : remove json.hpp from common.cpp ( #12697 )
...
* common : remove json.hpp from common.cpp
* fix comment
2025-04-02 09:58:34 +02:00
Chenguang Li
9bacd6b374
[CANN] get_rows and dup optimization ( #12671 )
...
* [CANN]get_rows and dup optimization.
Co-authored-by: hipudding <huafengchun@gmail.com>
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]GET_ROWS and CPY/DUP optimization
Co-authored-by: hipudding <huafengchun@gmail.com>
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
* [CANN]code style adjustment
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-04-02 15:22:13 +08:00
Concedo
669311365c
fixed gemma system prompt
2025-04-02 13:58:51 +08:00
Xuan-Son Nguyen
267c1399f1
common : refactor downloading system, handle mmproj with -hf option ( #12694 )
...
* (wip) refactor downloading system [no ci]
* fix all examples
* fix mmproj with -hf
* gemma3: update readme
* only handle mmproj in llava example
* fix multi-shard download
* windows: fix problem with std::min and std::max
* fix 2
2025-04-01 23:44:05 +02:00
Junil Kim
f423981ac8
opencl : fix memory allocation size ( #12649 )
...
issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283
This patch caps the memory allocation size so that it
does not exceed the maximum allocation size of the OpenCL device.
2025-04-01 09:54:34 -07:00
Concedo
fbf5c04c3c
silly me
2025-04-02 00:51:05 +08:00
Concedo
30e3d24ead
embd include name
2025-04-02 00:40:38 +08:00
Concedo
e37f27632f
clear cpu flag manually for templates, added truncation for embeddings
2025-04-02 00:18:30 +08:00
jklincn
e39e727e9a
llama : use LLM_KV_GENERAL_FILE_TYPE instead of gguf_find_key ( #12672 )
2025-04-01 14:54:28 +02:00
Sigbjørn Skjæret
5936a616e4
convert : BailingMoE : fix qkv split when head_dim is 0 ( #12687 )
...
NOTE: Ling-lite-base is broken, see https://huggingface.co/inclusionAI/Ling-lite-base/discussions/2
2025-04-01 14:37:13 +02:00
Concedo
8a4a9b8c19
Merge branch 'upstream' into concedo_experimental
2025-04-01 20:16:16 +08:00
Concedo
9e182b3e78
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .github/workflows/build.yml
# README.md
# docs/backend/SYCL.md
# ggml/src/ggml-sycl/CMakeLists.txt
# ggml/src/ggml-vulkan/CMakeLists.txt
# ggml/src/ggml-vulkan/ggml-vulkan.cpp
# scripts/sync-ggml.last
# tests/test-chat-template.cpp
2025-04-01 20:16:07 +08:00
Georgi Gerganov
3fd072a540
metal : use F32 prec in FA kernels ( #12688 )
...
* metal : use F32 prec in FA kernels
ggml-ci
* cont : fix FA vec kernel
ggml-ci
2025-04-01 14:57:19 +03:00
Concedo
0fd94e19f3
made tool calls more robust and allowed tool call template customization
2025-04-01 19:16:45 +08:00