Commit graph

154 commits

Author SHA1 Message Date
Concedo
59300dbdf5 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/actions/windows-setup-curl/action.yml
#	.github/workflows/build-linux-cross.yml
#	README.md
#	common/CMakeLists.txt
#	examples/parallel/README.md
#	examples/parallel/parallel.cpp
#	ggml/src/ggml-sycl/element_wise.cpp
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	tools/server/README.md
2025-05-18 23:27:53 +08:00
Isaac McFadyen
6a2bc8bfb7
server : added --no-prefill-assistant flag (#13608)
* added no-prefill-assistant flag

* reworded documentation comment

* updated server README.md
2025-05-17 23:59:48 +02:00
Georgi Gerganov
518329b2d4
parallel : add option for non-shared and larger prompts (#13598)
* parallel : add option for non-shared and larger prompts

* parallel : update readme [no ci]

* cont : add note about base models [no ci]

* parallel : better var name

ggml-ci
2025-05-17 12:58:55 +03:00
Concedo
21e31e255b Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/build.yml
#	.github/workflows/docker.yml
#	README.md
#	build-xcframework.sh
#	common/CMakeLists.txt
#	examples/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-cuda/CMakeLists.txt
#	ggml/src/ggml-metal/ggml-metal.m
#	ggml/src/ggml-metal/ggml-metal.metal
#	ggml/src/ggml-sycl/CMakeLists.txt
#	ggml/src/ggml-sycl/backend.hpp
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	ggml/src/ggml-sycl/mmvq.cpp
#	ggml/src/ggml-sycl/vecdotq.hpp
#	scripts/compare-llama-bench.py
#	src/CMakeLists.txt
#	src/llama-model.cpp
#	src/llama.cpp
#	tests/test-backend-ops.cpp
#	tests/test-opt.cpp
#	tools/llama-bench/README.md
#	tools/llama-bench/llama-bench.cpp
#	tools/mtmd/CMakeLists.txt
#	tools/mtmd/README.md
#	tools/mtmd/clip.cpp
#	tools/rpc/rpc-server.cpp
#	tools/server/CMakeLists.txt
#	tools/server/README.md
2025-05-13 00:28:35 +08:00
David Huang
7f323a589f
Add --no-op-offload to improve -ot pp perf in MoE models like llama4 400B (#13386) 2025-05-11 14:18:39 +02:00
Xuan-Son Nguyen
7fef11766c
arg : add env var to control mmproj (#13416)
* arg : add env var to control mmproj

* small note about -hf --mmproj
2025-05-10 08:16:29 +02:00
Xuan-Son Nguyen
33eff40240
server : vision support via libmtmd (#12898)
* server : (experimental) vision support via libmtmd

* mtmd : add more api around mtmd_image_tokens

* mtmd : add more api around mtmd_image_tokens

* mtmd : ability to calc image hash

* shared_ptr for mtmd_image_tokens

* move hash to user-defined ID (fixed)

* abstract out the batch management

* small fix

* refactor logic adding tokens to batch

* implement hashing image

* use FNV hash, now hash bitmap instead of file data

* allow decoding image embedding to be split into batches

* rm whitespace

* disable some features when mtmd is on

* fix --no-mmproj-offload

* mtmd_context_params no timings

* refactor server_inp to server_tokens

* fix the failing test case

* init

* wip

* working version

* add mtmd::bitmaps

* add test target

* rm redundant define

* test: mtmd_input_chunks_free

* rm outdated comment

* fix merging issue

* explicitly create mtmd::input_chunks

* mtmd_input_chunk_copy

* add clone()

* improve server_input struct

* clip : fix confusing naming of ffn_up and ffn_down

* rm ffn_i/o/g naming

* rename n_embd, n_ff

* small fix

* no check n_ff

* fix detokenize

* add const to various places

* add warning about breaking changes

* add c api

* helper: use mtmd_image_tokens_get_n_pos

* fix ctx_shift

* fix name shadowing

* more strict condition

* support remote image_url

* remote image_url log

* add CI test

* do not log base64

* add "has_multimodal" to /props

* remove dangling image

* speculative: use slot.cache_tokens.insert

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* rm can_be_detokenized

* on prompt processing done, assert cache_tokens.size

* handle_completions_impl returns void

* adapt the new web ui

* update docs and hot topics

* rm assert

* small fix (2)

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-05-09 19:29:37 +02:00
Concedo
6bb44391bd Merge commit '5c86c9ed3e' into concedo_experimental
# Conflicts:
#	tools/imatrix/imatrix.cpp
#	tools/mtmd/README.md
#	tools/run/README.md
#	tools/run/run.cpp
2025-05-10 00:30:18 +08:00
Bartowski
efb8b47eda
imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389)
* Add --parse-special for enabling parsing of special tokens in imatrix calculation

* whitespace
2025-05-09 11:53:58 +02:00
Concedo
2439014a03 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/build.yml
#	examples/embedding/embedding.cpp
#	tools/imatrix/imatrix.cpp
#	tools/perplexity/perplexity.cpp
2025-05-08 23:41:02 +08:00
Concedo
b6220669f4 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/docker.yml
#	Makefile
#	examples/CMakeLists.txt
#	ggml/CMakeLists.txt
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/convert.cpp
#	ggml/src/ggml-sycl/convert.hpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	scripts/sync-ggml.last
2025-05-08 23:07:33 +08:00
Georgi Gerganov
51fb96b1ff
context : remove logits_all flag (#13284)
* context : remove logits_all flag

ggml-ci

* llama : remove logits_all flag + reorder llama_context_params

ggml-ci
2025-05-08 14:26:50 +03:00
Georgi Gerganov
4773d7a02f
examples : remove infill (#13283)
ggml-ci
2025-05-07 10:28:02 +03:00
Concedo
0fa435b2a6 Merge commit '9b61acf060' into concedo_experimental
# Conflicts:
#	Makefile
#	docs/multimodal/MobileVLM.md
#	docs/multimodal/glmedge.md
#	docs/multimodal/llava.md
#	docs/multimodal/minicpmo2.6.md
#	docs/multimodal/minicpmv2.5.md
#	docs/multimodal/minicpmv2.6.md
#	requirements/requirements-all.txt
#	tools/mtmd/CMakeLists.txt
#	tools/mtmd/README.md
#	tools/mtmd/android/adb_run.sh
#	tools/mtmd/android/build_64.sh
#	tools/mtmd/clip-quantize-cli.cpp
2025-05-06 23:34:21 +08:00
Xuan-Son Nguyen
9b61acf060
mtmd : rename llava directory to mtmd (#13311)
* mv llava to mtmd

* change ref everywhere
2025-05-05 16:02:55 +02:00
Concedo
5a2808ffaf Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.flake8
#	.github/labeler.yml
#	.github/workflows/bench.yml.disabled
#	.github/workflows/build-linux-cross.yml
#	.github/workflows/build.yml
#	.github/workflows/server.yml
#	.gitignore
#	CMakeLists.txt
#	CODEOWNERS
#	Makefile
#	README.md
#	SECURITY.md
#	build-xcframework.sh
#	ci/run.sh
#	docs/development/HOWTO-add-model.md
#	docs/multimodal/MobileVLM.md
#	docs/multimodal/glmedge.md
#	docs/multimodal/llava.md
#	docs/multimodal/minicpmo2.6.md
#	docs/multimodal/minicpmv2.5.md
#	docs/multimodal/minicpmv2.6.md
#	examples/CMakeLists.txt
#	examples/pydantic_models_to_grammar_examples.py
#	grammars/README.md
#	pyrightconfig.json
#	requirements/requirements-all.txt
#	scripts/fetch_server_test_models.py
#	scripts/tool_bench.py
#	scripts/xxd.cmake
#	tests/CMakeLists.txt
#	tests/run-json-schema-to-grammar.mjs
#	tools/batched-bench/CMakeLists.txt
#	tools/batched-bench/README.md
#	tools/batched-bench/batched-bench.cpp
#	tools/cvector-generator/CMakeLists.txt
#	tools/cvector-generator/README.md
#	tools/cvector-generator/completions.txt
#	tools/cvector-generator/cvector-generator.cpp
#	tools/cvector-generator/mean.hpp
#	tools/cvector-generator/negative.txt
#	tools/cvector-generator/pca.hpp
#	tools/cvector-generator/positive.txt
#	tools/export-lora/CMakeLists.txt
#	tools/export-lora/README.md
#	tools/export-lora/export-lora.cpp
#	tools/gguf-split/CMakeLists.txt
#	tools/gguf-split/README.md
#	tools/imatrix/CMakeLists.txt
#	tools/imatrix/README.md
#	tools/imatrix/imatrix.cpp
#	tools/llama-bench/CMakeLists.txt
#	tools/llama-bench/README.md
#	tools/llama-bench/llama-bench.cpp
#	tools/llava/CMakeLists.txt
#	tools/llava/README.md
#	tools/llava/android/adb_run.sh
#	tools/llava/android/build_64.sh
#	tools/llava/clip-quantize-cli.cpp
#	tools/main/CMakeLists.txt
#	tools/main/README.md
#	tools/perplexity/CMakeLists.txt
#	tools/perplexity/README.md
#	tools/perplexity/perplexity.cpp
#	tools/quantize/CMakeLists.txt
#	tools/rpc/CMakeLists.txt
#	tools/rpc/README.md
#	tools/rpc/rpc-server.cpp
#	tools/run/CMakeLists.txt
#	tools/run/README.md
#	tools/run/linenoise.cpp/linenoise.cpp
#	tools/run/linenoise.cpp/linenoise.h
#	tools/run/run.cpp
#	tools/server/CMakeLists.txt
#	tools/server/README.md
#	tools/server/bench/README.md
#	tools/server/public_simplechat/readme.md
#	tools/server/tests/README.md
#	tools/server/themes/README.md
#	tools/server/themes/buttons-top/README.md
#	tools/server/themes/wild/README.md
#	tools/tokenize/CMakeLists.txt
#	tools/tokenize/tokenize.cpp
2025-05-03 12:15:36 +08:00
Diego Devesa
1d36b3670b
llama : move end-user examples to tools directory (#13249)
* llama : move end-user examples to tools directory

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-05-02 20:27:13 +02:00
Concedo
d8f1f73dd7 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/build-linux-cross.yml
#	.github/workflows/build.yml
#	cmake/build-info.cmake
#	common/CMakeLists.txt
#	examples/llava/README.md
#	examples/server/README.md
#	ggml/CMakeLists.txt
#	ggml/src/ggml-cuda/CMakeLists.txt
#	ggml/src/ggml-rpc/ggml-rpc.cpp
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
#	scripts/sync-ggml.last
#	tests/test-backend-ops.cpp
#	tests/test-chat-template.cpp
2025-05-02 16:54:15 +08:00
Concedo
ca53d1bedc Merge commit '13c9a3319b' into concedo_experimental
# Conflicts:
#	ggml/src/ggml-cpu/CMakeLists.txt
#	scripts/sync-ggml.last
#	tests/test-backend-ops.cpp
2025-05-02 16:42:16 +08:00
Georgi Gerganov
fab647e884
server : add cache reuse card link to help (#13230)
* server : add cache reuse card link to help

* args : use short url
2025-05-02 09:48:31 +03:00
Xuan-Son Nguyen
13c9a3319b
arg : remove CURLINFO_EFFECTIVE_METHOD (#13228) 2025-05-01 10:23:25 +02:00
Xuan-Son Nguyen
6f67cf1f48
arg : -hf do not fail if url mismatch (#13219)
* arg : -hf do not fail if url mismatch

* do not return if cannot parse metadata json
2025-04-30 21:29:15 +01:00
Olivier Chafik
3b127c7385
common : add -jf / --json-schema-file flag (#12011) 2025-04-30 14:52:35 +02:00
Concedo
8273739412 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.devops/cpu.Dockerfile
#	.devops/cuda.Dockerfile
#	.devops/intel.Dockerfile
#	.devops/llama-cli-cann.Dockerfile
#	.devops/musa.Dockerfile
#	.devops/rocm.Dockerfile
#	.devops/vulkan.Dockerfile
#	examples/llama-bench/llama-bench.cpp
#	examples/rpc/rpc-server.cpp
#	scripts/compare-llama-bench.py
#	tests/test-quantize-stats.cpp
2025-04-30 17:22:18 +08:00
Xuan-Son Nguyen
5933e6fdc9
arg : allow using -hf offline (#13202)
* arg : allow using -hf offline

* add more comments in code [no ci]
2025-04-30 10:46:32 +02:00
Concedo
b2ecfa0f55 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	README.md
#	examples/llama-bench/README.md
#	examples/llama-bench/llama-bench.cpp
#	examples/llava/CMakeLists.txt
#	ggml/src/ggml-rpc/ggml-rpc.cpp
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/element_wise.cpp
#	ggml/src/ggml-sycl/element_wise.hpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	tests/test-chat-template.cpp
2025-04-29 21:05:16 +08:00
Georgi Gerganov
43f2b07193
common : fix noreturn compile warning (#13151)
ggml-ci
2025-04-28 11:57:19 +03:00
Xuan-Son Nguyen
85f36e5e71
arg : fix unused variable (#13142) 2025-04-28 08:16:59 +03:00
Concedo
36c8db1248 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	examples/llava/clip-impl.h
#	examples/llava/clip.cpp
#	tests/test-arg-parser.cpp
#	tests/test-json-schema-to-grammar.cpp
2025-04-27 12:51:02 +08:00
Xuan-Son Nguyen
2d451c8059
common : add common_remote_get_content (#13123)
* common : add common_remote_get_content

* support max size and timeout

* add tests
2025-04-26 22:58:12 +02:00
Concedo
6b6597ebf1 allow for single token prompt processing (actual batch size 1) 2025-04-25 16:54:46 +08:00
Georgi Gerganov
13b4548877
cmake : do not include ./src as public for libllama (#13062)
* cmake : do not include ./src as public for libllama

ggml-ci

* cmake : rework tests

ggml-ci

* llguidance : remove unicode include

ggml-ci

* cmake : make c++17 private

ggml-ci
2025-04-24 16:00:10 +03:00
Xuan-Son Nguyen
7c727fbe39
arg : add --no-mmproj-offload (#13093)
* arg : add --no-mmproj-offload

* Update common/arg.cpp
2025-04-24 14:04:14 +02:00
Xuan-Son Nguyen
80982e815e
arg : clean up handling --mmproj with -hf (#13082)
* arg : clean up handling --mmproj with -hf

* rm change about no_mmproj

* Revert "rm change about no_mmproj"

This reverts commit 2cac8e0efb629d66c612f137e75d562f94bb9e6c.

* handle no_mmproj explicitly

* skip download mmproj on examples not using it
2025-04-24 12:14:13 +02:00
Concedo
8f1edcbdac Merge commit 'dc39a5e7a8' into concedo_experimental
# Conflicts:
#	README.md
#	SECURITY.md
#	docs/multimodal/MobileVLM.md
#	examples/llava/CMakeLists.txt
#	examples/llava/README.md
#	examples/llava/android/adb_run.sh
#	ggml/CMakeLists.txt
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	ggml/src/ggml-sycl/rope.cpp
#	ggml/src/ggml-sycl/rope.hpp
2025-04-24 11:49:08 +08:00
Xuan-Son Nguyen
243453533e
llava : update documentations (#13055)
* llava : update documentations

* fix typo
2025-04-22 10:37:00 +02:00
Xuan-Son Nguyen
84a9bf2fc2
mtmd : merge llava, gemma3 and minicpmv CLI into single llama-mtmd-cli (#13012)
* mtmd : merge `llava-cli` and `gemma3-cli` into single `mtmd-cli`

* support for minicpmv

* remove cpp files of llava and minicpmv

* update hot topics

* mtmd : add not supported msg for qwen2vl

* Update examples/llava/mtmd.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-04-21 15:32:58 +02:00
Concedo
a0ae187563 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/docker.yml
#	README.md
#	build-xcframework.sh
#	examples/llava/CMakeLists.txt
#	examples/llava/clip.cpp
#	examples/rpc/rpc-server.cpp
#	examples/run/run.cpp
#	ggml/src/ggml-cann/ggml-cann.cpp
#	scripts/sync-ggml-am.sh
#	scripts/sync-ggml.last
#	tests/test-backend-ops.cpp
#	tests/test-chat.cpp
2025-04-12 10:06:47 +08:00
tastelikefeet
b2034c2b55
contrib: support modelscope community (#12664)
* support download from modelscope

* support login

* remove comments

* add arguments

* fix code

* fix win32

* test passed

* fix readme

* revert readme

* change to MODEL_ENDPOINT

* revert tail line

* fix readme

* refactor model endpoint

* remove blank line

* fix header

* fix as comments

* update comment

* update readme

---------

Co-authored-by: tastelikefeet <yuze.zyz@alibaba-inc.com>
2025-04-11 14:01:56 +02:00
Concedo
ebf924c5d1 Merge branch 'upstream' into concedo_experimental 2025-04-08 21:46:30 +08:00
Concedo
822cf2430e Merge commit 'f1e3eb4249' into concedo_experimental
# Conflicts:
#	.github/workflows/build.yml
#	README.md
#	docs/backend/SYCL.md
#	examples/llava/clip.cpp
#	ggml/src/ggml-sycl/CMakeLists.txt
#	ggml/src/ggml-vulkan/cmake/host-toolchain.cmake.in
2025-04-08 20:48:53 +08:00
Prajwal B Mehendarkar
1d343b4069
arg : Including limits file on AIX (#12822) 2025-04-08 14:30:59 +02:00
Sergey Fedorov
f1e3eb4249
common : fix includes in arg.cpp and gemma3-cli.cpp (#12766)
* arg.cpp: add a missing include

* gemma3-cli.cpp: fix cinttypes include
2025-04-05 17:46:00 +02:00
エシュナヴァリシア
c6ff5d2a8d
common: custom hf endpoint support (#12769)
* common: custom hf endpoint support

Add support for custom Hugging Face endpoints via the HF_ENDPOINT environment variable.

You can now specify a custom Hugging Face endpoint using the HF_ENDPOINT environment variable when using the --hf-repo flag, which works similarly to huggingface-cli's endpoint configuration.

Example usage:
HF_ENDPOINT=https://hf-mirror.com/ ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"

The trailing slash in the URL is optional:
HF_ENDPOINT=https://hf-mirror.com ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"

* Update common/arg.cpp

Readability improvement

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: ベアトリーチェ <148695646+MakiSonomura@users.noreply.github.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2025-04-05 15:31:42 +02:00
Concedo
103d60ed2c Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	common/common.cpp
#	examples/batched-bench/batched-bench.cpp
#	examples/batched/batched.cpp
#	examples/export-lora/export-lora.cpp
#	examples/gritlm/gritlm.cpp
#	examples/parallel/parallel.cpp
#	examples/passkey/passkey.cpp
#	examples/speculative-simple/speculative-simple.cpp
#	examples/speculative/speculative.cpp
#	ggml/src/ggml-cann/CMakeLists.txt
#	ggml/src/ggml-cann/acl_tensor.cpp
#	ggml/src/ggml-cann/acl_tensor.h
#	ggml/src/ggml-cann/aclnn_ops.cpp
#	ggml/src/ggml-cann/aclnn_ops.h
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	tests/test-arg-parser.cpp
#	tests/test-backend-ops.cpp
2025-04-03 18:57:49 +08:00
Diego Devesa
e0e912f49b
llama : add option to override model tensor buffers (#11397)
* llama : add option to override tensor buffers

* ggml : fix possible underflow in ggml_nbytes
2025-04-02 14:52:01 +02:00
Xuan-Son Nguyen
267c1399f1
common : refactor downloading system, handle mmproj with -hf option (#12694)
* (wip) refactor downloading system [no ci]

* fix all examples

* fix mmproj with -hf

* gemma3: update readme

* only handle mmproj in llava example

* fix multi-shard download

* windows: fix problem with std::min and std::max

* fix 2
2025-04-01 23:44:05 +02:00
Concedo
396875e1c4 update api docs and lite 2025-03-29 15:39:25 +08:00
Piotr
2099a9d5db
server : Support listening on a unix socket (#12613)
* server : Bump cpp-httplib to include AF_UNIX windows support

Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>

* server : Allow running the server example on a unix socket

Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>

---------

Signed-off-by: Piotr Stankiewicz <piotr.stankiewicz@docker.com>
2025-03-27 23:41:04 +01:00
Concedo
5d7c5e9e33 Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	examples/tts/tts.cpp
2025-03-16 15:42:39 +08:00