Commit graph

173 commits

Author SHA1 Message Date
Lizonghang 8e9ab45458 fix model bytes counter 2024-12-10 14:57:48 +04:00
Lizonghang d78fa427e7 add memory copy speed test 2024-12-09 10:07:42 +04:00
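As an aside on what such a test involves, here is a minimal C++ sketch of a memory-copy bandwidth measurement (illustrative only; the buffer size, iteration count, and reporting are my assumptions, not the repository's code):

```cpp
#include <chrono>
#include <cstdio>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a memory copy speed test: copy a large
// buffer repeatedly and report the sustained rate in GiB/s.
int main() {
    const size_t n_bytes = 256ull * 1024 * 1024;  // 256 MiB per copy
    const int n_iters = 8;
    std::vector<char> src(n_bytes, 1), dst(n_bytes);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n_iters; i++) {
        memcpy(dst.data(), src.data(), n_bytes);
    }
    auto t1 = std::chrono::steady_clock::now();

    double s = std::chrono::duration<double>(t1 - t0).count();
    double gib_s = (double) n_bytes * n_iters / s / (1024.0 * 1024 * 1024);
    printf("memcpy bandwidth: %.2f GiB/s\n", gib_s);
    return 0;
}
```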
Lizonghang 2c2171cebf fix display 2024-12-08 22:57:12 +04:00
Zonghang Li df813675d0 fix flops count and ram/vram speed test 2024-12-08 10:14:05 +04:00
Lizonghang cd823546dd llama_profile_device: add arg n_predict 2024-12-06 16:37:25 +04:00
Lizonghang a46d56cc60 llama_model_n_flops: remove ctxs 2024-12-06 11:31:53 +04:00
Lizonghang f1c1d1b929 add support for Q5_K and fix byte count for Q6_K 2024-12-06 07:59:45 +04:00
Lizonghang 4fd3b6679e fix n_gpu_layers 2024-12-05 20:28:19 +04:00
Lizonghang e74967c488 throw error when using async upload; will support later 2024-12-05 14:18:39 +04:00
Lizonghang 6f54a12c7d add gpu support in llama_model_kvcache_size and llama_model_compute_buf_size 2024-11-29 21:06:32 +04:00
Lizonghang 1123c00e45 recover comments in llama_profile_device 2024-11-29 19:07:07 +04:00
Lizonghang 68ecabc8c3 add cpu_read_ram_bw, metal_read_vram_bw, cuda_read_vram_bw 2024-11-29 19:04:53 +04:00
Lizonghang 0f73d12247 decrease compute buf from available memory 2024-11-29 11:15:54 +04:00
Lizonghang 45a1e55eec reduce kv cache from available memory 2024-11-28 20:21:21 +04:00
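For context, the quantity being reserved here follows the standard KV-cache arithmetic; a minimal sketch (the function and parameter names are mine, not the repository's):

```cpp
#include <cstdint>
#include <cstdio>

// Standard KV-cache size arithmetic: per layer, the K and V caches each
// store n_ctx * n_embd_gqa elements. Names here are illustrative only.
uint64_t kv_cache_bytes(uint64_t n_layer, uint64_t n_ctx,
                        uint64_t n_embd_gqa, uint64_t bytes_per_elem) {
    return n_layer * n_ctx * 2 /* K and V */ * n_embd_gqa * bytes_per_elem;
}

int main() {
    // e.g. a 7B-class config with an f16 cache: 32 layers, 4096 ctx,
    // 4096-dim K/V (no GQA), 2 bytes per element -> 2 GiB.
    printf("%llu bytes\n", (unsigned long long) kv_cache_bytes(32, 4096, 4096, 2));
    return 0;
}
```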
Lizonghang 9a7bbce7ad fix t_load_us 2024-11-28 15:55:21 +04:00
Lizonghang 740f7f0b95 use multithreaded disk r/w test 2024-11-27 22:14:17 +04:00
Lizonghang f7507ec20b fix disk r/w test, add disk access latency, and correct units (GB, GiB) 2024-11-27 21:36:12 +04:00
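(The unit correction presumably separates decimal from binary prefixes: 1 GB = 10^9 bytes, whereas 1 GiB = 2^30 = 1,073,741,824 bytes, about 7.4% larger; conflating the two skews disk and memory bandwidth figures by that margin.)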
Zonghang Li f78c437172 add device_inp_embd_delay test, device_memory_bw test, device_cuda_memory_bw test 2024-11-26 22:28:02 +04:00
Lizonghang a7a95b53fe add q80xf32 and count_n_params 2024-11-24 23:11:12 +04:00
Lizonghang 3fe00a16a0 count model flops for f32xf32, f16xf32, q4kxf32, q6kxf32 2024-11-24 13:13:32 +04:00
Lizonghang a5ba34169a add f32, f16, q4k_f32, q6k_f32 flops test and fix duplicate inp_embd in subgraphs 2024-11-23 21:36:34 +04:00
Zonghang Li 7ee1423006 add model_flops 2024-11-21 20:06:16 +04:00
Lizonghang 477ecf2084 add llama_model_n_flops 2024-11-20 19:40:27 +04:00
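A common first-order approximation behind such counters is that a dense transformer forward pass costs roughly two FLOPs per weight per token; the per-quant-type counters in this log are certainly more granular, so treat this as a hypothetical sketch:

```cpp
#include <cstdint>

// Rule-of-thumb FLOPs estimate for a dense transformer forward pass:
// one multiply and one add per weight per token, i.e. ~2 * n_params.
// Illustrative only; it ignores attention's n_ctx-dependent terms.
uint64_t estimate_forward_flops(uint64_t n_params, uint64_t n_tokens) {
    return 2 * n_params * n_tokens;
}
```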
Lizonghang 10f6f92c7e add f32, f16, q8, q4k speed test for cuda 2024-11-10 23:41:13 +04:00
Lizonghang f4260bb346 add device_flops() for cpu, metal, and cuda 2024-11-10 23:11:05 +04:00
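For flavor, a minimal CPU FLOPs microbenchmark might look like the sketch below (my own toy version, not device_flops() itself; a serial multiply-add chain measures FMA latency more than peak throughput, whereas the real tests time the actual matmul kernels per quant type):

```cpp
#include <chrono>
#include <cstdio>

int main() {
    const long long n = 200'000'000;
    float a = 0.5f;
    const float b = 0.999999f, c = 0.0000001f;  // decays toward 0.1, no overflow
    auto t0 = std::chrono::steady_clock::now();
    for (long long i = 0; i < n; i++) {
        a = a * b + c;  // one mul + one add = 2 FLOPs per iteration
    }
    auto t1 = std::chrono::steady_clock::now();
    double s = std::chrono::duration<double>(t1 - t0).count();
    // Print the accumulator so the loop is not optimized away.
    printf("%.2f GFLOPS (checksum %g)\n", 2.0 * n / s / 1e9, a);
    return 0;
}
```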
Lizonghang 5fae6ac36f add cpu flops test 2024-11-09 20:53:42 +04:00
Lizonghang 2bd4d03aa8 add automatic layer window size assignment workflow 2024-11-08 18:21:03 +04:00
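The idea behind automatic layer window assignment, reconstructed from the commit messages rather than the code (both the names and the proportional heuristic are my assumptions), is to split the model's layers across devices in proportion to their profiled speed:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical proportional split: give each device a contiguous "window"
// of layers sized by its share of the total measured throughput.
std::vector<int> assign_layer_windows(int n_layers, const std::vector<double> & device_speed) {
    double total = 0;
    for (double s : device_speed) total += s;

    std::vector<int> window(device_speed.size());
    int assigned = 0;
    for (size_t i = 0; i < device_speed.size(); i++) {
        window[i] = (int) (n_layers * device_speed[i] / total);
        assigned += window[i];
    }
    window.back() += n_layers - assigned;  // hand rounding leftovers to the last device
    return window;
}

int main() {
    // e.g. 32 layers over two devices profiled at 60 and 20 GFLOPS
    auto w = assign_layer_windows(32, {60.0, 20.0});
    printf("%d %d\n", w[0], w[1]);  // -> 24 8
    return 0;
}
```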
Lizonghang 53cb3a6069 synchronize device info 2024-11-07 22:02:01 +04:00
Lizonghang ef7fdf70cc add LLAMA_API llama_profile_device 2024-11-07 09:30:39 +04:00
Lizonghang 407c71ae52 add cpu and gpu profile 2024-11-06 20:42:28 +04:00
Lizonghang 976a4c3534 fix cuda support 2024-11-04 11:00:01 +04:00
Lizonghang 9d6a6845ac fix cuda support 2024-11-03 12:02:30 +04:00
Lizonghang 684b2ac05b fix cuda support 2024-11-03 11:19:50 +04:00
Lizonghang a83f577c63 reset backend_embd and out_embd as input tensors 2024-11-02 20:40:21 +04:00
Lizonghang 7aa771e701 set backend_embd and out_embd on the device of their adjacent nodes 2024-11-02 09:41:40 +04:00
Lizonghang bfc3f9e185 split input as a separate subgraph 2024-10-30 23:40:13 +04:00
Lizonghang 64291ad70c fix next_sub_gf 2024-10-28 11:03:46 +04:00
Lizonghang 925ea39382 fine-tune prefetch 2024-10-28 10:58:48 +04:00
Lizonghang 76a7fc7527 support different window sizes 2024-10-26 12:34:14 +04:00
Lizonghang c97ea10617 add mmap prefetch and unloading 2024-10-25 16:33:56 +04:00
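On POSIX systems, mmap-based prefetch and unloading typically rests on page-cache advice calls; a minimal sketch of that mechanism (the wrapper names are mine; only posix_madvise and its advice constants are standard API):

```cpp
#include <cstddef>
#include <sys/mman.h>

// Advise the kernel to read a mapped weight region ahead of use,
// then let it drop the pages once that layer window has been computed.
void prefetch_region(void * addr, size_t len) {
    posix_madvise(addr, len, POSIX_MADV_WILLNEED);  // asynchronous read-ahead
}

void unload_region(void * addr, size_t len) {
    posix_madvise(addr, len, POSIX_MADV_DONTNEED);  // pages may be reclaimed
}
```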
Lizonghang 2a01ff5fb1 init 2024-10-23 09:42:32 +04:00
Diego Devesa 6374743747 ggml : add backend registry / device interfaces to BLAS backend (#9752)
* ggml : add backend registry / device interfaces to BLAS backend

* fix mmap usage when using host buffers
2024-10-07 21:55:08 +02:00
Georgi Gerganov d5ac8cf2f2 ggml : add metal backend registry / device (#9713)
* ggml : add metal backend registry / device

ggml-ci

* metal : fix names [no ci]

* metal : global registry and device instances

ggml-ci

* cont : alternative initialization of global objects

ggml-ci

* llama : adapt to backend changes

ggml-ci

* fixes

* metal : fix indent

* metal : fix build when MTLGPUFamilyApple3 is not available

ggml-ci

* fix merge

* metal : avoid unnecessary singleton accesses

ggml-ci

* metal : minor fix [no ci]

* metal : g_state -> g_ggml_ctx_dev_main [no ci]

* metal : avoid reference of device context in the backend context

ggml-ci

* metal : minor [no ci]

* metal : fix maxTransferRate check

* metal : remove transfer rate stuff

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-10-07 18:27:51 +03:00
Georgi Gerganov 8c475b97b8 rerank : use [SEP] token instead of [BOS] (#9737)
* rerank : use [SEP] token instead of [BOS]

ggml-ci

* common : sanity check for non-NULL tokens

ggml-ci

* ci : adjust rank score interval

ggml-ci

* ci : add shebang to run.sh

ggml-ci
2024-10-05 15:55:04 +03:00
bandoti d6fe7abf04 ggml: unify backend logging mechanism (#9709)
* Add scaffolding for ggml logging macros

* Metal backend now uses GGML logging

* Cuda backend now uses GGML logging

* Cann backend now uses GGML logging

* Add enum tag to parameters

* Use C memory allocation funcs

* Fix compile error

* Use GGML_LOG instead of GGML_PRINT

* Rename llama_state to llama_logger_state

* Prevent null format string

* Fix whitespace

* Remove log callbacks from ggml backends

* Remove cuda log statement
2024-10-03 17:39:03 +02:00
Diego Devesa c83ad6d01e ggml-backend : add device and backend reg interfaces (#9707)
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2024-10-03 01:49:47 +02:00
Xuan Son Nguyen a39ab216aa llama : reduce compile time and binary size (#9712)
* llama : speed up compile time

* fix build

* fix build (2)
2024-10-02 15:49:55 +02:00
Georgi Gerganov cad341d889 metal : reduce command encoding overhead (#9698)
* metal : reduce command encoding overhead

ggml-ci

* metal : add comments
2024-10-01 16:00:25 +03:00
Georgi Gerganov a90484c6d9 llama : print correct model type for Llama 3.2 1B and 3B 2024-10-01 11:42:01 +03:00
Georgi Gerganov f4d2b8846a llama : add reranking support (#9510)
* py : add XLMRobertaForSequenceClassification [no ci]

* py : fix scalar-tensor conversion [no ci]

* py : fix position embeddings chop [no ci]

* llama : read new cls tensors [no ci]

* llama : add classification head (wip) [no ci]

* llama : add "rank" pooling type

ggml-ci

* server : add rerank endpoint

ggml-ci

* llama : avoid ggml_repeat during classification

* rerank : cleanup + comments

* server : accept /rerank endpoint in addition to /v1/rerank [no ci]

* embedding : parse special tokens

* jina : support v1 reranker

* vocab : minor style

ggml-ci

* server : initiate tests for later

ggml-ci

* server : add docs

* llama : add comment [no ci]

* llama : fix uninitialized tensors

* ci : add rerank tests

ggml-ci

* add reranking test

* change test data

* Update examples/server/server.cpp

Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>

* add `--reranking` argument

* update server docs

* llama : fix comment [no ci]

ggml-ci

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-09-28 17:42:03 +03:00