Commit graph

7620 commits

Concedo
dca7ab5d9e fixed tools compile error 2025-01-04 18:45:21 +08:00
Concedo
1559d4d2fb fixed defective websearch 2025-01-04 16:47:38 +08:00
Gilad S.
c31fc8b966 fix: Vulkan shader gen binary path (#11037) 2025-01-04 09:17:31 +01:00
Concedo
b37354bf73 upgrade to upload-artifact v4 2025-01-04 13:32:49 +08:00
Concedo
e07e73aeb4 updated lite 2025-01-04 10:47:48 +08:00
Concedo
b4dc29f425 kobo cheats death again (+1 squashed commit)
Squashed commits:

[708e2429] kobo cheats death again
2025-01-04 01:06:41 +08:00
Concedo
f9f1585a7f broken merge - kcpp changes will be applied above this commit for better tracking. 2025-01-03 23:49:17 +08:00
Molly Sophia
4b0c638b9a
common : disable KV cache shifting automatically for unsupported models (#11053)
* Disable KV cache shifting automatically for unsupported models

instead of exiting directly

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* Update common/common.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-01-03 14:13:18 +02:00
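The change above swaps a hard exit for a soft fallback. A minimal self-contained sketch of that pattern, using illustrative stand-ins (common_params, model_supports_kv_shift) rather than the actual llama.cpp symbols:

```cpp
#include <cstdio>

// Illustrative stand-ins; the real check lives in common/common.cpp and
// operates on the actual llama.cpp parameter and model types.
struct common_params { bool ctx_shift = true; };
static bool model_supports_kv_shift() {
    return false; // pretend we loaded a model whose KV cache cannot be shifted
}

int main() {
    common_params params;
    if (params.ctx_shift && !model_supports_kv_shift()) {
        // Before this commit the unsupported case exited directly;
        // now it warns and simply disables the option.
        std::fprintf(stderr, "warning: KV cache shifting is not supported by this model, disabling\n");
        params.ctx_shift = false;
    }
    return 0;
}
```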
Concedo
1012281320 updated colab 2025-01-03 18:02:02 +08:00
Georgi Gerganov
e7da954ecc metal : avoid uint (#11019) 2025-01-03 11:26:14 +02:00
Georgi Gerganov
f66f582927
llama : refactor src/llama.cpp (#10902)
* llama : scatter llama.cpp into multiple modules (wip)

* llama : control-vector -> adapter

* llama : arch

* llama : mmap

ggml-ci

* ci : remove BUILD_SHARED_LIBS=OFF

ggml-ci

* llama : arch (cont)

ggml-ci

* llama : chat

ggml-ci

* llama : model

ggml-ci

* llama : hparams

ggml-ci

* llama : adapter

ggml-ci

* examples : fix

ggml-ci

* rebase

ggml-ci

* minor

* llama : kv cache

ggml-ci

* llama : impl

ggml-ci

* llama : batch

ggml-ci

* cont

ggml-ci

* llama : context

ggml-ci

* minor

* llama : context (cont)

ggml-ci

* llama : model loader

ggml-ci

* common : update lora

ggml-ci

* llama : quant

ggml-ci

* llama : quant (cont)

ggml-ci

* minor [no ci]
2025-01-03 10:18:53 +02:00
Concedo
911da8765f Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	README.md
#	examples/llama.android/llama/src/main/cpp/llama-android.cpp
#	examples/run/run.cpp
#	examples/server/README.md
#	examples/server/bench/README.md
#	examples/server/tests/README.md
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	tests/test-backend-ops.cpp
2025-01-03 11:56:20 +08:00
Concedo
22fd7a0439 fix make tools for linux 2025-01-03 11:39:23 +08:00
Pierrick Hymbert
2f0ee84b9b
server: bench: minor fixes (#10765)
* server/bench:
- support OpenAI streaming standard output terminated with [DONE]\n\n
- export k6 raw results in CSV
- fix too many idle TCP connections in tcp_wait
- add a metric for time to emit the first token

* server/bench:
- fix the case where Prometheus is not started
- wait for the server to be ready before starting the bench
2025-01-02 18:06:12 +01:00
Xuan Son Nguyen
0da5d86026
server : allow using LoRA adapters per-request (#10994)
* slot.can_batch_with

* lora per request

* test: force disable cache prompt

* move can_batch_with check

* fix condition

* add slow test with llama 8b

* update docs

* move lora change task to queue

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* lora_base

* remove redundant check

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-01-02 15:05:18 +01:00
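For orientation, a sketch of what a per-request payload enabled by this PR might look like; the "lora" array with per-adapter id and scale matches the commit's intent, but the exact field names are an assumption to verify against the server docs updated in this PR:

```cpp
#include <cstdio>

int main() {
    // Hypothetical /completion request body with a per-request LoRA scale;
    // verify field names against examples/server/README.md.
    const char * request = R"({
        "prompt": "Write a haiku about autumn.",
        "n_predict": 64,
        "lora": [ { "id": 0, "scale": 0.5 } ]
    })";
    std::puts(request);
    return 0;
}
```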
Benson Wong
a45433ba20
readme : add llama-swap to infrastructure section (#11032)
* list llama-swap under tools in README

* readme: add llama-swap to Infrastructure
2025-01-02 09:14:54 +02:00
Srihari-mcw
0827b2c1da
ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
* Fixes for clang AVX VNNI

* enable AVX VNNI and alder lake build for MSVC

* Apply suggestions from code review

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-12-31 15:23:33 +01:00
Xuan Son Nguyen
45095a61bf
server : clean up built-in template detection (#11026)
* server : clean up built-in template detection

* fix compilation

* add chat template test

* fix condition
2024-12-31 15:22:01 +01:00
Xuan Son Nguyen
5896c65232
server : add OAI compat for /v1/completions (#10974)
* server : add OAI compat for /v1/completions

* add test

* add docs

* better docs
2024-12-31 12:34:13 +01:00
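As a usage sketch, the legacy OpenAI completions shape that this endpoint now accepts; which optional fields are honoured is an assumption to check against the docs added in this PR:

```cpp
#include <cstdio>

int main() {
    // A minimal OpenAI-style /v1/completions request body; treat the set of
    // supported fields as an assumption to confirm against the server docs.
    const char * request = R"({
        "model": "any",
        "prompt": "The capital of France is",
        "max_tokens": 8,
        "temperature": 0
    })";
    std::puts(request);
    return 0;
}
```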
ymcki
bc7b1f8632
convert : fix Llama-3_1-Nemotron-51B rope settings (#11008)
* conflict resolution

* move comments after brackets to their own lines

* DeciLMCausalModel now reads rope_theta from config.json properly
2024-12-31 13:04:48 +02:00
Peter
6e1531aca5
common, examples, ggml : fix MSYS2 GCC compiler errors and warnings when building with LLAMA_CURL=ON and GGML_OPENCL=ON (#11013)
In common/common.cpp:
* Convert the stat() call used to check whether a file exists to the standard library function std::filesystem::exists (fixes an error about failing to match the correct function signature)
* Add conditions to check whether PATH_MAX is already defined in the WIN32 environment (it is already defined in MSYS2, which triggered a warning)

In examples/run/run.cpp:
* Include the io.h header (fixes an error that the function _get_osfhandle cannot be found)
* Change the initialisers for OVERLAPPED to an empty struct (fixes a warning about uninitialised members)
* Add an initialiser for hFile (fixes a warning that it may be uninitialised)
* Cast the curl_off_t percentage value to long int in the generate_progress_prefix function (fixes a warning that curl_off_t is long long int)

In ggml/src/ggml-opencl/ggml-opencl.cpp:
* Initialise certain declared cl_mem variables to nullptr for greater safety (fixes a warning that the B_d variable may be used unassigned)
2024-12-31 01:46:06 +01:00
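The first bullet describes a standard modernisation; a small self-contained illustration of replacing a stat()-based existence check with std::filesystem::exists (C++17):

```cpp
#include <cstdio>
#include <filesystem>
#include <sys/stat.h>

// Old style: POSIX stat(), which MSYS2 GCC failed to match here.
static bool file_exists_stat(const char * path) {
    struct stat sb;
    return stat(path, &sb) == 0;
}

// New style: the portable C++17 standard library check.
static bool file_exists_fs(const char * path) {
    return std::filesystem::exists(path);
}

int main() {
    const char * path = "common.cpp";
    std::printf("stat: %d, filesystem: %d\n",
                (int) file_exists_stat(path), (int) file_exists_fs(path));
    return 0;
}
```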
Jeff Bolz
716bd6dec3
vulkan: optimize mul_mat for small values of N (#10991)
Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
2024-12-30 18:27:11 +01:00
Concedo
8de44d1e41 refactored some outputs 2024-12-30 22:30:27 +08:00
ag2s20150909
c250ecb315 android : fix llama_batch free (#11014) 2024-12-30 14:35:13 +02:00
Concedo
5eb314a04b websearch length limits and caching 2024-12-30 18:30:54 +08:00
Concedo
3fea11675d websearch integrated into lite, changed to POST 2024-12-30 17:30:41 +08:00
Concedo
6026501ed2 websearch functional 2024-12-30 12:01:51 +08:00
Concedo
709dab6289 improved websearch endpoint 2024-12-29 19:39:16 +08:00
Jeff Bolz
a813badbbd
vulkan: im2col and matmul optimizations for stable diffusion (#10942)
* tests: Add im2col perf tests

* vulkan: optimize im2col, more elements per thread

* vulkan: increase small tile size for NV_coopmat2

* vulkan: change im2col to 512 elements per workgroup
2024-12-29 10:16:34 +01:00
Concedo
5451a8e8a9 updated lite 2024-12-29 17:04:29 +08:00
Jeff Bolz
fdd2188912 vulkan: Use push constant offset to handle misaligned descriptors (#10987) 2024-12-29 09:35:11 +01:00
Concedo
2de1975ca2 improve websearch api 2024-12-28 23:36:40 +08:00
Isaac McFadyen
f865ea149d server: added more docs for response_fields field (#10995) 2024-12-28 16:09:19 +01:00
Alexey Parfenov
16cdce7b68 server : fix token duplication when streaming with stop strings (#10997) 2024-12-28 16:08:54 +01:00
Concedo
baaecd1c65 added a basic websearch proxy 2024-12-28 19:07:00 +08:00
Concedo
7c671f289e Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.github/workflows/docker.yml
#	examples/cvector-generator/mean.hpp
#	examples/cvector-generator/pca.hpp
#	examples/export-lora/export-lora.cpp
#	examples/rpc/rpc-server.cpp
#	examples/run/README.md
#	examples/run/run.cpp
#	examples/server/CMakeLists.txt
#	examples/server/README.md
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-vulkan/ggml-vulkan.cpp
#	scripts/compare-llama-bench.py
#	scripts/hf.sh
#	tests/test-chat-template.cpp
2024-12-28 12:48:34 +08:00
Concedo
29afdb7c90 minor linting 2024-12-28 12:21:35 +08:00
kallewoof
23ec550835
PoC: add chat template heuristics (#1283)
* PoC: add chat template heuristics

The Vicuna fallback chat template adapter is not ideal in some cases (e.g., on a sub-portion of the BBC news classification task on Kaggle, a q4_k_m Qwen 2.5 3B-Instruct gguf scored 82% accuracy with Vicuna versus 88% with the official ChatML format).

This PR adds a simple proof-of-concept heuristic that inspects the chat template and upgrades the adapter when it is able to.

* gemma 2 heuristic

* Phi 4, Llama 3.x heuristics

* better qwen vs generic heuristic

* cleanup

* mistral (generic) heuristic

* fix sys msg for mistral

* phi 3.5

* mistral v3

* cohere (aya expanse 32b based)

* only derive from chat template if AutoGuess

* add notes about alpaca fallbacks

* added AutoGuess.json dummy

* add mistral v7

* switch to using a json list with search strings
2024-12-28 12:15:23 +08:00
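A minimal sketch of the heuristic's shape, assuming (per the last bullet) a list of search strings mapped to adapters; the rule strings and names below are illustrative, not the actual AutoGuess.json contents:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical rule table standing in for AutoGuess.json: if the model's
// embedded chat template contains the search string, upgrade to that adapter.
struct template_rule { std::string search; std::string adapter; };

static std::string guess_adapter(const std::string & chat_template) {
    const std::vector<template_rule> rules = {
        { "<|im_start|>",        "ChatML"  },
        { "[INST]",              "Mistral" },
        { "<|start_header_id|>", "Llama 3" },
    };
    for (const auto & r : rules) {
        if (chat_template.find(r.search) != std::string::npos) {
            return r.adapter;
        }
    }
    return "Vicuna"; // the old fallback adapter
}

int main() {
    std::printf("%s\n", guess_adapter("{{ '<|im_start|>' + role }}").c_str());
    return 0;
}
```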
Eve
d79d8f39b4
vulkan: multi-row k quants (#10846)
* multi row k quant shaders!

* better row selection

* more row choices

* readjust row selection

* rm_kq=2 by default
2024-12-26 16:54:44 +01:00
Peter
d283d02bf2
examples, ggml : fix GCC compiler warnings (#10983)
Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers]  (emitted for all struct field except first)
2024-12-26 14:59:11 +01:00
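The first warning class has a one-line fix worth recording: size_t takes the dedicated %zu format specifier, not %ld:

```cpp
#include <cstdio>
#include <cstddef>

int main() {
    size_t n = 42;
    // std::printf("%ld\n", n); // warns: '%ld' expects 'long int', argument is 'size_t'
    std::printf("%zu\n", n);    // %zu is the portable size_t specifier
    return 0;
}
```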
Nexes the Elder
3e6ef8e0ef
Probable typo (#1287) 2024-12-26 11:51:04 +08:00
Concedo
263d49d0d5 qkv warning 2024-12-25 15:15:38 +08:00
Reza Kakhki
9ba399dfa7
server : add support for "encoding_format": "base64" to the */embeddings endpoints (#10967)
* add support for base64

* fix base64 test

* improve test

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-12-24 21:33:04 +01:00
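For reference, a sketch of how such output is commonly produced, assuming (as the OpenAI API does) base64 over the embedding's raw little-endian float32 bytes; the server's actual implementation may differ:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Minimal base64 encoder, sufficient for this sketch.
static std::string base64_encode(const uint8_t * data, size_t len) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (size_t i = 0; i < len; i += 3) {
        uint32_t v = (uint32_t) data[i] << 16;
        if (i + 1 < len) v |= (uint32_t) data[i + 1] << 8;
        if (i + 2 < len) v |= (uint32_t) data[i + 2];
        out += tbl[(v >> 18) & 63];
        out += tbl[(v >> 12) & 63];
        out += i + 1 < len ? tbl[(v >> 6) & 63] : '=';
        out += i + 2 < len ? tbl[v & 63] : '=';
    }
    return out;
}

int main() {
    // Assumed convention: encode the raw float32 buffer of the embedding.
    std::vector<float> embedding = { 0.1f, -0.2f, 0.3f };
    std::string b64 = base64_encode(
        (const uint8_t *) embedding.data(), embedding.size() * sizeof(float));
    std::printf("\"embedding\": \"%s\"\n", b64.c_str());
    return 0;
}
```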
Djip007
2cd43f4900
ggml : more performance with llamafile tinyblas on x86_64 (#10714)
* more performance with llamafile tinyblas on x86_64.

- add bf16 support
- change dispatch strategy (thanks:
https://github.com/ikawrakow/ik_llama.cpp/pull/71 )
- reduce memory bandwidth

simple tinyblas dispatch and more cache-friendly

* tinyblas dynamic dispatching

* sgemm: add M blocks.

* - git 2.47 uses short ids of length 9.
- show-progress is not part of GNU Wget2

* remove unstable test
2024-12-24 18:54:49 +01:00
NeverLucky
09fe2e7613
server: allow filtering llama server response fields (#10940)
* llama_server_response_fields

* llama_server_response_fields_fix_issues

* params fixes

* fix

* clarify docs

* change to "response_fields"

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-12-24 17:39:49 +01:00
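A toy sketch of the filtering idea, with a string map standing in for the server's JSON response object; the field names are examples, not the server's actual schema:

```cpp
#include <cstdio>
#include <map>
#include <set>
#include <string>

int main() {
    // Stand-in for the full JSON response the server would normally send.
    std::map<std::string, std::string> response = {
        { "content",          "Hello there." },
        { "model",            "llama"        },
        { "tokens_predicted", "3"            },
        { "timings",          "{...}"        },
    };
    // Client-requested subset, e.g. "response_fields": ["content", "tokens_predicted"].
    const std::set<std::string> response_fields = { "content", "tokens_predicted" };

    // Emit only the requested fields.
    for (const auto & [key, value] : response) {
        if (response_fields.count(key)) {
            std::printf("%s: %s\n", key.c_str(), value.c_str());
        }
    }
    return 0;
}
```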
Georgi Gerganov
30caac3a68
llama : the WPM vocabs use the CLS token as BOS (#10930)
* llama : the WPM vocabs use the CLS token as BOS

ggml-ci

* llama : add comment
2024-12-24 09:44:20 +02:00
Diego Devesa
60cfa728e2
ggml : use wstring for backend search paths (#10960)
ggml-ci
2024-12-24 04:05:27 +01:00
Diego Devesa
3327bb0f8d ggml : fix arm enabled features check (#10961) 2024-12-24 04:05:17 +01:00
Diego Devesa
32d6ee6385 ggml : fix const usage in SSE path (#10962) 2024-12-23 20:25:52 +01:00
Concedo
2a890ec25a Breaking change: unify the Windows and Linux build flags.
To do a full build on Windows you now need LLAMA_PORTABLE=1 LLAMA_VULKAN=1 LLAMA_CLBLAST=1
2024-12-23 22:35:54 +08:00