Xuan-Son Nguyen
2f54e348ad
llama : fix build_ffn without gate ( #13336 )
* llama : fix build_ffn without gate
* fix build on windows
* Revert "fix build on windows"
This reverts commit fc420d3c7eef3481d3d2f313fef2757cb33a7c56.
2025-05-06 14:25:40 +02:00
Johannes Gäßler
2356fb1d53
CUDA: fix bad asserts for partial offload ( #13337 )
2025-05-06 13:58:51 +02:00
Concedo
13cee48740
embed aria2c for windows, add slowness check with highpriority recommendation (+1 squashed commits)
Squashed commits:
[b9b695217] embed aria2c for windows, add slowness check with highpriority recommendation (+1 squashed commits)
Squashed commits:
[90b5d389d] embed aria2c for windows, add slowness check with highpriority recommendation (+1 squashed commits)
Squashed commits:
[fbbaa989f] embed aria2c for windows
2025-05-06 18:56:02 +08:00
Sigbjørn Skjæret
764b85627b
convert : qwen2/3moe : set yarn metadata if present ( #13331 )
* set yarn metadata if present
* add comment about enabling YaRN
Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
---------
Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
2025-05-06 11:12:06 +02:00
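A minimal sketch of the kind of conversion logic the commit above describes, assuming the usual convert_hf_to_gguf.py pattern and the gguf-py GGUFWriter rope-scaling helpers; the HF config keys and helper function below are illustrative assumptions, not the commit's literal diff.
```python
# Sketch only: write YaRN rope-scaling metadata when the HF config carries it.
# Assumes gguf-py's GGUFWriter helpers; config key names here are illustrative.
import gguf

def set_yarn_metadata(hparams: dict, writer: "gguf.GGUFWriter") -> None:
    rope_scaling = hparams.get("rope_scaling") or {}
    rope_type = rope_scaling.get("rope_type", rope_scaling.get("type"))
    if rope_type == "yarn" and "factor" in rope_scaling:
        writer.add_rope_scaling_type(gguf.RopeScalingType.YARN)
        writer.add_rope_scaling_factor(rope_scaling["factor"])
        writer.add_rope_scaling_orig_ctx_len(rope_scaling["original_max_position_embeddings"])
```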
Concedo
9981ba8427
glm4 special BOS handling
2025-05-06 16:41:55 +08:00
Johannes Gäßler
15a28ec8c7
CUDA: fix --split-mode row for MMQ ( #13323 )
2025-05-06 08:36:46 +02:00
compilade
a7366faa5b
gguf-py : avoid requiring pyside6 for other scripts ( #13036 )
- gguf-py : remove gguf-py/gguf/scripts/__init__.py because it's not needed
Implicit namespaces are supported since Python 3.3 (https://peps.python.org/pep-0420/ ),
and the entrypoints in pyproject.toml can directly refer to the main functions.
2025-05-05 22:27:31 -04:00
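A small illustration of the reasoning above, assuming a gguf.scripts layout like gguf-py's (the module and entrypoint names are illustrative, not taken from the repository): with PEP 420 implicit namespace packages the directory imports without an __init__.py, and a pyproject.toml script entry can point straight at a module's main function.
```python
# Sketch: PEP 420 means gguf/scripts/ needs no __init__.py to be importable.
# Module and entrypoint names below are illustrative assumptions.
import importlib

mod = importlib.import_module("gguf.scripts.gguf_dump")
mod.main()
# pyproject.toml can then declare the console script directly, e.g.
#   [project.scripts]
#   gguf-dump = "gguf.scripts.gguf_dump:main"
```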
Jeff Bolz
005756a2a9
vulkan: scalar flash attention implementation
2025-05-05 19:40:45 -05:00
Johannes Gäßler
9070365020
CUDA: fix logic for clearing padding with -ngl 0 ( #13320 )
2025-05-05 22:32:13 +02:00
oobabooga
233461f812
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) ( #13264 )
* sampling: add Top-nσ sampler to `llama-server` and sampler ordering
* revert: sampler ordering
* revert: VS' crappy auto-formatting
* revert: VS' crappy auto-formatting pt.2
* revert: my crappy eye sight...
* sampling: add XTC to Top-nσ sampler chain
* sampling: add Dyna. Temp. to Top-nσ sampler chain
* sampling: actually remove Top-nσ from sampler(oops)
* Integrate top_n_sigma into main sampler chain
* Define COMMON_SAMPLER_TYPE_TOP_N_SIGMA
* Formatting
* Lint
* Exit early in the sampler if nsigma < 0
---------
Co-authored-by: CasualAutopsy <casual_autopsy@outlook.com>
2025-05-05 22:12:19 +02:00
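For reference, a minimal sketch of the Top-nσ filtering step that the chain integration above wires into llama-server; this is an assumed reconstruction from the sampler's published description, not the repository's implementation, and it treats a non-positive nsigma as disabled in line with the exit-early bullet.
```python
# Sketch of Top-nσ: keep only tokens whose logit lies within n_sigma standard
# deviations of the maximum logit; everything else is masked to -inf.
import numpy as np

def top_n_sigma(logits: np.ndarray, n_sigma: float) -> np.ndarray:
    if n_sigma <= 0:  # treated as disabled, mirroring the "exit early" change
        return logits
    threshold = logits.max() - n_sigma * logits.std()
    return np.where(logits >= threshold, logits, -np.inf)

# Example: with n_sigma = 1.0, only logits near the top survive before softmax.
filtered = top_n_sigma(np.array([5.0, 4.8, 2.0, -1.0]), 1.0)
probs = np.exp(filtered - filtered.max())
probs /= probs.sum()
```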
Concedo
f59b5eb561
added toggle for guidance
2025-05-05 22:21:46 +08:00
igardev
b34c859146
server : Webui - change setText command from parent window to also send the message. ( #13309 )
* setText command from parent window for llama-vscode now sends the message automatically.
* Upgrade packages versions to fix vulnerabilities with "npm audit fix" command.
* Fix code formatting.
* Add index.html.gz changes.
* Revert "Upgrade packages versions to fix vulnerabilities with "npm audit fix" command."
This reverts commit 67687b7fda8a293724ba92ea30bb151677406bc8.
* easier approach
* add setTimeout
---------
Co-authored-by: igardev <ivailo.gardev@akros.ch>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-05-05 16:03:31 +02:00
Xuan-Son Nguyen
9b61acf060
mtmd : rename llava directory to mtmd ( #13311 )
* mv llava to mtmd
* change ref everywhere
2025-05-05 16:02:55 +02:00
Xuan-Son Nguyen
5215b91e93
clip : fix confused naming ffn_up and ffn_down ( #13290 )
* clip : fix confused naming ffn_up and ffn_down
* rm ffn_i/o/g naming
* rename n_embd, n_ff
* small fix
* no check n_ff
2025-05-05 12:54:44 +02:00
Sigbjørn Skjæret
ae803bfc3d
convert : bailingmoe : set yarn metadata if present ( #13312 )
2025-05-05 12:34:26 +02:00
Concedo
41142ad67a
try sm35
2025-05-05 17:35:11 +08:00
Akarshan Biswas
66645a5285
SYCL: Disable mul_mat kernels for noncontiguous tensor b ( #13308 )
ggml-ci
2025-05-05 13:39:10 +05:30
Xuan-Son Nguyen
27aa259532
mtmd : add C public API ( #13184 )
* init
* wip
* working version
* add mtmd::bitmaps
* add test target
* rm redundant define
* test: mtmd_input_chunks_free
* rm outdated comment
* fix merging issue
* explicitly create mtmd::input_chunks
* mtmd_input_chunk_copy
* add clone()
* add const to various places
* add warning about breaking changes
* helper: use mtmd_image_tokens_get_n_pos
2025-05-04 23:43:42 +02:00
Diego Devesa
9fdfcdaedd
rpc : use backend registry, support dl backends ( #13304 )
2025-05-04 21:25:43 +02:00
Aaron Teo
6eb7d25c70
ggml : activate s390x simd for Q3_K ( #13301 )
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
2025-05-04 19:49:12 +02:00
Diego Devesa
86bd60d3fe
llava/mtmd : fixes to fully support dl backends ( #13303 )
2025-05-04 17:05:20 +02:00
Diego Devesa
9f2da5871f
llama : build windows releases with dl backends ( #13220 )
2025-05-04 14:20:49 +02:00
Johannes Gäßler
93c4e23905
CUDA: fix race condition in MMQ stream-k fixup ( #13299 )
2025-05-04 14:16:39 +02:00
Johannes Gäßler
8afbd96818
CUDA: fix race condition in MMQ ids_dst ( #13294 )
2025-05-04 13:58:38 +02:00
Jeff Bolz
8ae5ebcf85
vulkan: Additional type support for unary, binary, and copy ( #13266 )
Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
2025-05-04 07:17:16 +02:00
Johannes Gäßler
3e959f0976
imatrix: fix oob writes if src1 is not contiguous ( #13286 )
2025-05-04 00:50:37 +02:00
Xuan-Son Nguyen
36667c8edc
clip : revert the change of BOI/EOI token for GLM-edge ( ⚠️ breaking change) ( #13259 )
2025-05-03 20:07:54 +02:00
Concedo
a0498ad1b1
update sdui fixed resetting
2025-05-04 00:16:08 +08:00
ymcki
3bf785f3ef
llama : Llama-3_1-Nemotron-Ultra-253B-v1 support ( #12843 )
2025-05-03 17:39:51 +02:00
Concedo
5a2808ffaf
Merge branch 'upstream' into concedo_experimental
# Conflicts:
# .flake8
# .github/labeler.yml
# .github/workflows/bench.yml.disabled
# .github/workflows/build-linux-cross.yml
# .github/workflows/build.yml
# .github/workflows/server.yml
# .gitignore
# CMakeLists.txt
# CODEOWNERS
# Makefile
# README.md
# SECURITY.md
# build-xcframework.sh
# ci/run.sh
# docs/development/HOWTO-add-model.md
# docs/multimodal/MobileVLM.md
# docs/multimodal/glmedge.md
# docs/multimodal/llava.md
# docs/multimodal/minicpmo2.6.md
# docs/multimodal/minicpmv2.5.md
# docs/multimodal/minicpmv2.6.md
# examples/CMakeLists.txt
# examples/pydantic_models_to_grammar_examples.py
# grammars/README.md
# pyrightconfig.json
# requirements/requirements-all.txt
# scripts/fetch_server_test_models.py
# scripts/tool_bench.py
# scripts/xxd.cmake
# tests/CMakeLists.txt
# tests/run-json-schema-to-grammar.mjs
# tools/batched-bench/CMakeLists.txt
# tools/batched-bench/README.md
# tools/batched-bench/batched-bench.cpp
# tools/cvector-generator/CMakeLists.txt
# tools/cvector-generator/README.md
# tools/cvector-generator/completions.txt
# tools/cvector-generator/cvector-generator.cpp
# tools/cvector-generator/mean.hpp
# tools/cvector-generator/negative.txt
# tools/cvector-generator/pca.hpp
# tools/cvector-generator/positive.txt
# tools/export-lora/CMakeLists.txt
# tools/export-lora/README.md
# tools/export-lora/export-lora.cpp
# tools/gguf-split/CMakeLists.txt
# tools/gguf-split/README.md
# tools/imatrix/CMakeLists.txt
# tools/imatrix/README.md
# tools/imatrix/imatrix.cpp
# tools/llama-bench/CMakeLists.txt
# tools/llama-bench/README.md
# tools/llama-bench/llama-bench.cpp
# tools/llava/CMakeLists.txt
# tools/llava/README.md
# tools/llava/android/adb_run.sh
# tools/llava/android/build_64.sh
# tools/llava/clip-quantize-cli.cpp
# tools/main/CMakeLists.txt
# tools/main/README.md
# tools/perplexity/CMakeLists.txt
# tools/perplexity/README.md
# tools/perplexity/perplexity.cpp
# tools/quantize/CMakeLists.txt
# tools/rpc/CMakeLists.txt
# tools/rpc/README.md
# tools/rpc/rpc-server.cpp
# tools/run/CMakeLists.txt
# tools/run/README.md
# tools/run/linenoise.cpp/linenoise.cpp
# tools/run/linenoise.cpp/linenoise.h
# tools/run/run.cpp
# tools/server/CMakeLists.txt
# tools/server/README.md
# tools/server/bench/README.md
# tools/server/public_simplechat/readme.md
# tools/server/tests/README.md
# tools/server/themes/README.md
# tools/server/themes/buttons-top/README.md
# tools/server/themes/wild/README.md
# tools/tokenize/CMakeLists.txt
# tools/tokenize/tokenize.cpp
2025-05-03 12:15:36 +08:00
Concedo
b258e23003
fixed bad merge
2025-05-03 11:43:01 +08:00
Concedo
0951ad9f58
temp merge, not working
2025-05-03 11:42:01 +08:00
Concedo
1228f91ccb
even better comfyui handling, dynamic node ids
2025-05-03 11:21:22 +08:00
Concedo
6cb36ce1ae
better zenity checks for multilingual
2025-05-03 10:09:47 +08:00
Diego Devesa
1d36b3670b
llama : move end-user examples to tools directory ( #13249 )
* llama : move end-user examples to tools directory
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-05-02 20:27:13 +02:00
Georgi Gerganov
b34443923c
sync : ggml ( #13268 )
* vulkan : kernels for depthwise 2D convolution (CONV_2D_DW) (ggml/1204)
* vulkan : add kernels for depthwise 2d convolution (OP_CONV_2D_DW)
* review: remove src_x/y < 0 checks; add performance tests
* sync : ggml
ggml-ci
* vulkan : fix lint (#0 )
---------
Co-authored-by: Acly <aclysia@gmail.com>
2025-05-02 20:54:30 +03:00
Georgi Gerganov
a75cb30dc9
context : fix reorder logic ( #13267 )
ggml-ci
2025-05-02 20:54:13 +03:00
shalinib-ibm
3f3769ba76
ggml : Enable MMA for BF16 in llamafile_sgemm ( #13148 )
This patch upstreams llamafile's CPU matrix multiplication kernels for ppc64le, using MMA builtins for the BF16 data type.
The change results in 9x - 40x gains in total speed S t/s (i.e. all tokens / total time) across the batch sizes tested with the llama-batched-bench benchmark.
The patch was tested with Meta-Llama-3-8B and Mistral-7B models (BF16 models generated with llama-quantize from the corresponding FP32 models) on an IBM POWER10 machine.
Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
2025-05-02 19:53:12 +03:00
Jared Van Bortel
2f567611c0
llama-model : support Qwen2 embedding models and pooling_mode_lasttoken ( #13245 )
2025-05-02 11:42:30 -04:00
Jared Van Bortel
7d2123484e
convert : use correct context length for nomic-embed-text-v2 ( #13216 )
2025-05-02 11:41:54 -04:00
Xuan-Son Nguyen
074e42ab31
convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf ( #13209 )
* wip
* qwen2.5vl ok
* vision: fix models missing "text_config"
* add test
* fix test repo name
* fix 32B model
* Revert "fix 32B model"
This reverts commit 651752f1ae25fe8a01c1e57c18cf2eca80b2774e.
* clarify about 32B
* rm qwen surgery script
* update llava/readme
* move V_ENC_EMBD_PATCH handling to Qwen2VLVisionModel
2025-05-02 17:17:15 +02:00
Georgi Gerganov
c642bc014c
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
* kv-cache : separate recurrent vs non-recurrent impl (wip)
ggml-ci
* kv-cache : init -> constructor + add llama_memory_params
ggml-ci
* kv-cache : fix callback reference
ggml-ci
* context : llama_kv_cache -> llama_memory_i
ggml-ci
* context : move memory creation logic to model
ggml-ci
* llama : remove reference of memory during encode
ggml-ci
* kv-cache : hide padding details in the implementation
ggml-ci
* kv-cache : add ubatch_next()
ggml-ci
* context : simplify sbatch logic
ggml-ci
* kv-cache : hide defrag logic in the implementation
ggml-ci
* context : hide kv cache details in implementation
ggml-ci
* build : fix
ggml-ci
* cont : another fix
ggml-ci
* kv-cache : simplify interface (wip)
ggml-ci
* kv-cache : use separate KV cell structs for unified/recurrent
ggml-ci
* kv-cache : clean-up
ggml-ci
* model : better llama_model::create_model() signature
ggml-ci
* kv-cache : fix recurrent seq_rm()
ggml-ci
* kv-cache : replace `struct callbacks` with `llama_model &`
ggml-ci
* kv-cache : replace `struct graph_params` with `llama_context &`
ggml-ci
* kv-cache : fix offload check
ggml-ci
* context : avoid passing unique_ptr
ggml-ci
* kv-cache : avoid using the backends from the llama_context
ref #13113
ggml-ci
* kv-cache : more consistent debug logs [no ci]
* kv-cache : do not pass the full llama_context for kv graphs
ggml-ci
* kv-cache : remove comment
* kv-cache : ggml_rope_ext_inplace -> ggml_rope_ext
ggml-ci
* kv-cache : fix recurrent multi-user case
ggml-ci
* memory : remove comments [no ci]
2025-05-02 17:48:36 +03:00
Concedo
17cbf9fd49
plamo fixed
2025-05-02 22:46:17 +08:00
Concedo
423a68c45d
multipart downloading up to 9 parts
2025-05-02 22:34:20 +08:00
Sigbjørn Skjæret
cb06a3c363
llama : orion rope type is neox ( #13261 )
2025-05-02 12:44:24 +02:00
Sigbjørn Skjæret
626083faf7
llama : plamo rope type is neox ( #13260 )
2025-05-02 12:40:56 +02:00
piDack
2af6880178
llama-chat : reset glmedge chat template ( #13253 )
...
* reset glmedge chat template
* fix glmedge chat template
2025-05-02 11:06:09 +02:00
Concedo
d8f1f73dd7
Merge branch 'upstream' into concedo_experimental
# Conflicts:
# .github/workflows/build-linux-cross.yml
# .github/workflows/build.yml
# cmake/build-info.cmake
# common/CMakeLists.txt
# examples/llava/README.md
# examples/server/README.md
# ggml/CMakeLists.txt
# ggml/src/ggml-cuda/CMakeLists.txt
# ggml/src/ggml-rpc/ggml-rpc.cpp
# ggml/src/ggml-vulkan/CMakeLists.txt
# ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
# scripts/sync-ggml.last
# tests/test-backend-ops.cpp
# tests/test-chat-template.cpp
2025-05-02 16:54:15 +08:00
Concedo
ca53d1bedc
Merge commit '13c9a3319b' into concedo_experimental
# Conflicts:
# ggml/src/ggml-cpu/CMakeLists.txt
# scripts/sync-ggml.last
# tests/test-backend-ops.cpp
2025-05-02 16:42:16 +08:00
Concedo
7694cf9bfb
fix rope bug (+1 squashed commits)
Squashed commits:
[5bf69efe0] fix rope bug
2025-05-02 16:35:01 +08:00