Lizonghang
871a27f66a
use user input setup in standalone mode
2025-01-21 21:06:09 +04:00
Lizonghang
c701340719
set minimum disk speed to 500 Mbps
2025-01-19 10:09:27 +04:00
Lizonghang
4f5265a78d
fix beta calculation on devices without GPU
2025-01-18 21:32:18 +04:00
Lizonghang
6761ca5358
show device props
2025-01-18 17:25:27 +04:00
Lizonghang
f9d16fbf71
fix manually set layer window and gpu layers
2025-01-18 16:44:26 +04:00
Lizonghang
70bd9db008
fix kappa
2025-01-18 11:51:18 +04:00
Lizonghang
c634c2fcbe
reformat HiGHS log
2025-01-16 09:38:12 +04:00
Lizonghang
94f01993fa
reformat HiGHS log
2025-01-16 09:27:53 +04:00
Zonghang Li
46e99218b4
add arg --cuda-mem
2025-01-16 09:15:34 +04:00
Lizonghang
18c96e8042
add option USE_HIGHS
2025-01-15 20:05:49 +04:00
Zonghang Li
790f702d0c
report k and W
2025-01-15 16:55:21 +04:00
Lizonghang
1e1ba5bb91
add api llama_model_set_n_gpu_layers
2025-01-15 10:47:53 +04:00
Lizonghang
9279a2e3ff
fix error in llama_context_n_gpu_layers
2025-01-15 10:08:41 +04:00
Lizonghang
5d9aadf3d5
use HiGHS to solve the allocation program
2025-01-15 10:04:04 +04:00
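For context on the HiGHS-based solver adopted here: a minimal sketch of posing a tiny linear program through the HiGHS C++ API. The model below is purely illustrative; the repo's actual allocation program has its own variables and constraints.

```cpp
// Minimal HiGHS sketch: minimize x0 + x1 subject to x0 + 2*x1 >= 4,
// with 0 <= x0, x1 <= 10. Illustrative only -- not the actual
// prima.cpp allocation program.
#include "Highs.h"
#include <cstdio>

int main() {
    HighsModel model;
    model.lp_.num_col_ = 2;
    model.lp_.num_row_ = 1;
    model.lp_.sense_   = ObjSense::kMinimize;
    model.lp_.col_cost_  = {1.0, 1.0};
    model.lp_.col_lower_ = {0.0, 0.0};
    model.lp_.col_upper_ = {10.0, 10.0};
    model.lp_.row_lower_ = {4.0};
    model.lp_.row_upper_ = {1.0e30};  // effectively +infinity
    // Constraint matrix in column-wise compressed form: one row, two entries.
    model.lp_.a_matrix_.format_ = MatrixFormat::kColwise;
    model.lp_.a_matrix_.start_  = {0, 1, 2};
    model.lp_.a_matrix_.index_  = {0, 0};
    model.lp_.a_matrix_.value_  = {1.0, 2.0};

    Highs highs;
    highs.passModel(model);
    highs.run();
    const HighsSolution & sol = highs.getSolution();
    printf("x0 = %g, x1 = %g\n", sol.col_value[0], sol.col_value[1]);
    return 0;
}
```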
Zonghang Li
df813675d0
fix flops count and ram/vram speed test
2024-12-08 10:14:05 +04:00
Lizonghang
cd823546dd
llama_profile_device: add arg n_predict
2024-12-06 16:37:25 +04:00
Lizonghang
68ecabc8c3
add cpu_read_ram_bw, metal_read_vram_bw, cuda_read_vram_bw
2024-11-29 19:04:53 +04:00
Lizonghang
45a1e55eec
deduct kv cache from available memory
2024-11-28 20:21:21 +04:00
Lizonghang
9a7bbce7ad
fix t_load_us
2024-11-28 15:55:21 +04:00
Lizonghang
9cd22177d0
remove arg test_file
2024-11-27 21:34:45 +04:00
Zonghang Li
7ee1423006
add model_flops
2024-11-21 20:06:16 +04:00
Lizonghang
477ecf2084
add llama_model_n_flops
2024-11-20 19:40:27 +04:00
Lizonghang
5fae6ac36f
add cpu flops test
2024-11-09 20:53:42 +04:00
Lizonghang
2bd4d03aa8
add automatic layer window size assignment workflow
2024-11-08 18:21:03 +04:00
Lizonghang
53cb3a6069
synchronize device info
2024-11-07 22:02:01 +04:00
Lizonghang
ef7fdf70cc
add LLAMA_API llama_profile_device
2024-11-07 09:30:39 +04:00
Zonghang Li
b922418cca
convert MB to GB
2024-11-06 20:47:17 +04:00
Lizonghang
407c71ae52
add cpu and gpu profile
2024-11-06 20:42:28 +04:00
Lizonghang
4e1be1065d
add memory speed test
2024-11-06 10:57:30 +04:00
Lizonghang
a7f3d917a1
add device name getter
2024-11-05 22:04:14 +04:00
Lizonghang
2d447266e9
add swap capacity test
2024-11-05 21:42:45 +04:00
Lizonghang
9eed6b14bf
add disk read speed test
2024-11-05 21:12:02 +04:00
Lizonghang
9cd66f2145
add profiler
2024-11-05 20:29:09 +04:00
Lizonghang
766ec7862b
test
2024-11-05 17:22:24 +04:00
Lizonghang
76a7fc7527
support different window sizes
2024-10-26 12:34:14 +04:00
Lizonghang
c97ea10617
add mmap prefetch and unloading
2024-10-25 16:33:56 +04:00
Lizonghang
2a01ff5fb1
init
2024-10-23 09:42:32 +04:00
Georgi Gerganov
8c475b97b8
rerank : use [SEP] token instead of [BOS] ( #9737 )
* rerank : use [SEP] token instead of [BOS]
ggml-ci
* common : sanity check for non-NULL tokens
ggml-ci
* ci : adjust rank score interval
ggml-ci
* ci : add shebang to run.sh
ggml-ci
2024-10-05 15:55:04 +03:00
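For the change above, a hedged sketch of the idea: a reranker input joins the query and document with the model's separator token rather than a second BOS. Helper names follow the llama.cpp C API of that period; the exact server-side layout may differ, and the PR's non-NULL sanity check guards against vocabularies that lack these special tokens (the getters return -1 in that case).

```cpp
// Sketch: build a "[BOS] query [EOS] [SEP] doc [EOS]" style input for a
// reranker. Token layout is illustrative; see the server's rerank
// formatting for the authoritative version.
#include "llama.h"
#include <vector>

static std::vector<llama_token> join_for_rerank(
        const llama_model * model,
        const std::vector<llama_token> & query,
        const std::vector<llama_token> & doc) {
    std::vector<llama_token> out;
    out.push_back(llama_token_bos(model));
    out.insert(out.end(), query.begin(), query.end());
    out.push_back(llama_token_eos(model));
    out.push_back(llama_token_sep(model));  // separator instead of a second BOS
    out.insert(out.end(), doc.begin(), doc.end());
    out.push_back(llama_token_eos(model));
    return out;
}
```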
matiaslin
faac0bae26
common : ensure llama_batch size does not exceed max size ( #9668 )
A crash was observed when the number of tokens added to a batch exceeds
llama_batch size. An assertion in llama_batch_add was added to protect
against llama_batch size overflow.
2024-09-29 15:25:00 +03:00
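A minimal sketch of the kind of guard this commit describes, under the assumption of an explicit capacity argument; the upstream fix instead derives the limit from a sentinel that llama_batch_init leaves at the end of the seq_id array.

```cpp
// Sketch: guard a batch-add helper against writing past the allocated
// llama_batch. n_tokens_max is illustrative -- the upstream assertion
// checks a sentinel rather than taking the capacity as a parameter.
#include "llama.h"
#include <cassert>

static void batch_add_checked(llama_batch & batch, int32_t n_tokens_max,
                              llama_token id, llama_pos pos,
                              llama_seq_id seq_id, bool logits) {
    assert(batch.n_tokens < n_tokens_max && "llama_batch size exceeded");
    batch.token   [batch.n_tokens]    = id;
    batch.pos     [batch.n_tokens]    = pos;
    batch.n_seq_id[batch.n_tokens]    = 1;
    batch.seq_id  [batch.n_tokens][0] = seq_id;
    batch.logits  [batch.n_tokens]    = logits;
    batch.n_tokens++;
}
```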
Georgi Gerganov
f4d2b8846a
llama : add reranking support ( #9510 )
* py : add XLMRobertaForSequenceClassification [no ci]
* py : fix scalar-tensor conversion [no ci]
* py : fix position embeddings chop [no ci]
* llama : read new cls tensors [no ci]
* llama : add classification head (wip) [no ci]
* llama : add "rank" pooling type
ggml-ci
* server : add rerank endpoint
ggml-ci
* llama : avoid ggml_repeat during classification
* rerank : cleanup + comments
* server : accept /rerank endpoint in addition to /v1/rerank [no ci]
* embedding : parse special tokens
* jina : support v1 reranker
* vocab : minor style
ggml-ci
* server : initiate tests for later
ggml-ci
* server : add docs
* llama : add comment [no ci]
* llama : fix uninitialized tensors
* ci : add rerank tests
ggml-ci
* add reranking test
* change test data
* Update examples/server/server.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* add `--reranking` argument
* update server docs
* llama : fix comment [no ci]
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-09-28 17:42:03 +03:00
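One way to consume the new "rank" pooling type from the C API, as a sketch: create the context with pooling_type = LLAMA_POOLING_TYPE_RANK, decode a query/document pair, and read the per-sequence pooled output, which for rank pooling is a single relevance score. The /v1/rerank server endpoint added here wraps the same mechanism.

```cpp
// Sketch: read a rerank score for sequence 0 after decoding a
// query/document pair with LLAMA_POOLING_TYPE_RANK enabled.
#include "llama.h"
#include <cstdio>

static void print_rerank_score(llama_context * ctx) {
    // With rank pooling, the pooled output for a sequence is a single
    // float: the relevance score.
    const float * out = llama_get_embeddings_seq(ctx, 0);
    if (out) {
        printf("rerank score: %f\n", out[0]);
    }
}
```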
Georgi Gerganov
6262d13e0b
common : reimplement logging ( #9418 )
https://github.com/ggerganov/llama.cpp/pull/9418
2024-09-15 20:46:12 +03:00
Georgi Gerganov
0abc6a2c25
llama : llama_perf + option to disable timings during decode ( #9355 )
* llama : llama_perf + option to disable timings during decode
ggml-ci
* common : add llama_arg
* Update src/llama.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* perf : separate functions in the API
ggml-ci
* perf : safer pointer handling + naming update
ggml-ci
* minor : better local var name
* perf : abort on invalid sampler pointer
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-09-13 09:53:38 +03:00
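From the caller's side, the separated functions look roughly like the sketch below (names as they landed in llama.h); setting no_perf in the context params disables timing collection during decode.

```cpp
// Sketch: print and reset decode/sampling timings via the split
// llama_perf API.
#include "llama.h"

static void report_perf(llama_context * ctx, llama_sampler * smpl) {
    llama_perf_context_print(ctx);   // load/eval timings for the context
    llama_perf_sampler_print(smpl);  // sampling timings
    llama_perf_context_reset(ctx);
    llama_perf_sampler_reset(smpl);
}
```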
Ahmad Tameem
2b00fa7997
riscv : modify Makefile and add a RISCV_VECT to print log info ( #9442 )
- Added ggml_cpu_has_riscv_v() in GGML to print system info in the log
- Modified the Makefile to only use the flag when cross-compiling for RISC-V
2024-09-12 14:24:31 +03:00
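The new flag is queryable like the other ggml_cpu_has_* helpers; a trivial sketch of printing it in a system-info line:

```cpp
// Sketch: include the RISC-V vector flag in a system-info line.
#include "ggml.h"
#include <cstdio>

int main() {
    printf("RISCV_VECT = %d\n", ggml_cpu_has_riscv_v());
    return 0;
}
```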
Farbod Bijary
67155ab7f5
feat: implement retry logic for downloading models using --model-url flag ( #9255 )
* feat: implement retry logic for downloading models using --model-url flag
* Update common/common.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* Update common/common.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* apply comments
* implements a retry function to avoid duplication
* fix editorconfig
* change function name
---------
Co-authored-by: farbod <farbod.bjary82@gmail.com>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-09-11 11:22:37 +02:00
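A generic sketch of the retry shape this PR factors out, with hypothetical names; the real helper wraps the libcurl download path in common.cpp and is not reproduced here.

```cpp
// Sketch: retry a fallible operation with a fixed delay between attempts.
// download_once() below is a hypothetical stand-in for the real
// curl-based fetch.
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>

static bool with_retries(int max_attempts, std::chrono::milliseconds delay,
                         const std::function<bool()> & op) {
    for (int attempt = 1; attempt <= max_attempts; ++attempt) {
        if (op()) {
            return true;
        }
        fprintf(stderr, "attempt %d/%d failed, retrying...\n",
                attempt, max_attempts);
        std::this_thread::sleep_for(delay);
    }
    return false;
}

// usage: with_retries(3, std::chrono::seconds(2),
//                     [&] { return download_once(url, path); });
```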
Xuan Son Nguyen
6cd4e03444
arg : bring back missing ifdef ( #9411 )
* arg : bring back missing ifdef
* replace with llama_supports_gpu_offload
2024-09-10 22:41:29 +02:00
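The follow-up replaces a compile-time #ifdef with the runtime capability query, which keeps the arg parser buildable regardless of backend flags; a sketch:

```cpp
// Sketch: gate GPU-related CLI behavior on the runtime query rather
// than a compile-time #ifdef. The warning text is illustrative.
#include "llama.h"
#include <cstdio>

static void maybe_warn_gpu_flags() {
    if (!llama_supports_gpu_offload()) {
        fprintf(stderr,
                "warning: no GPU backend compiled in; "
                "--n-gpu-layers has no effect\n");
    }
}
```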
Xuan Son Nguyen
bfe76d4a17
common : move arg parser code to arg.cpp ( #9388 )
* common : move arg parser to arg.cpp
* better categorize args
* add cmake
* missing climits
* missing cstdarg
* common : more explicit includes
* fix build
* refactor gpt_params_parse
* update server readme
* fix test
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-09 23:36:09 +02:00
Xuan Son Nguyen
3f7ccfd649
common : bring back missing args, add env var duplication check ( #9375 )
* common : bring back missing args
* move duplication check to test-arg-parser
* add check for duplicated env var
* correct default values
2024-09-08 18:08:55 +02:00
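The duplication check boils down to walking every registered argument and asserting that no environment variable is claimed by two options; a sketch of that test logic with illustrative types (the parser's real arg-definition struct differs).

```cpp
// Sketch: detect an env var registered by two different CLI options.
// The arg_def type here is illustrative, not the parser's real one.
#include <cassert>
#include <set>
#include <string>
#include <vector>

struct arg_def {
    std::string flag;
    std::string env;  // empty if the option has no env-var alias
};

static void check_no_duplicated_env(const std::vector<arg_def> & args) {
    std::set<std::string> seen;
    for (const auto & a : args) {
        if (a.env.empty()) {
            continue;
        }
        const bool inserted = seen.insert(a.env).second;
        assert(inserted && "duplicated env var");
        (void) inserted;
    }
}
```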
slaren
a249843d89
common : restore --n-gpu-layers ( #9371 )
2024-09-08 16:44:42 +02:00
Xuan Son Nguyen
00b02bb249
imatrix : fix arg parser for imatrix ( #9366 )
* imatrix : fix arg parser
* beautify printing first arg
2024-09-08 12:12:17 +02:00
Georgi Gerganov
faf69d4237
llama : sanitize invalid tokens ( #9357 )
* common : do not add null tokens during warmup
ggml-ci
* llama : check that the input tokens are valid
ggml-ci
* tests : fix batch size of bert model
ggml-ci
2024-09-08 00:33:13 +03:00
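A sketch of the validity check described above: a token is acceptable only if it falls inside the model's vocabulary range. API names follow llama.h of that period.

```cpp
// Sketch: reject out-of-range token ids before decoding.
#include "llama.h"

static bool tokens_are_valid(const llama_model * model,
                             const llama_token * tokens, int32_t n) {
    const int32_t n_vocab = llama_n_vocab(model);
    for (int32_t i = 0; i < n; ++i) {
        if (tokens[i] < 0 || tokens[i] >= n_vocab) {
            return false;
        }
    }
    return true;
}
```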