Lizonghang
f7507ec20b
fix disk r/w test, add disk access latency, and correct units (GB, GiB)
2024-11-27 21:36:12 +04:00
Lizonghang
9cd22177d0
remove arg test_file
2024-11-27 21:34:45 +04:00
Lizonghang
0a91ad3edc
fix cuda compatibility errors
2024-11-26 22:35:58 +04:00
Zonghang Li
f78c437172
add device_inp_embd_delay test, device_memory_bw test, device_cuda_memory_bw test,
2024-11-26 22:28:02 +04:00
Lizonghang
a7a95b53fe
add q80xf32 and count_n_params
2024-11-24 23:11:12 +04:00
Lizonghang
3fe00a16a0
count model flops for f32xf32, f16xf32, q4kxf32, q6kxf32
2024-11-24 13:13:32 +04:00
Lizonghang
a5ba34169a
add f32, f16, q4k_f32, q6k_f32 flops test and fix duplicate inp_embd in subgraphs
2024-11-23 21:36:34 +04:00
Zonghang Li
7ee1423006
add model_flops
2024-11-21 20:06:16 +04:00
Zonghang Li
80f6b72e71
remove device_flops from profiler api
2024-11-21 08:37:57 +04:00
Lizonghang
477ecf2084
add llama_model_n_flops
2024-11-20 19:40:27 +04:00
Lizonghang
10f6f92c7e
add f32, f16, q8, q4k speed test for cuda
2024-11-10 23:41:13 +04:00
Lizonghang
f4260bb346
add device_flops() for cpu, metal, and cuda
2024-11-10 23:11:05 +04:00
Lizonghang
5fae6ac36f
add cpu flops test
2024-11-09 20:53:42 +04:00
Lizonghang
2bd4d03aa8
add automatic layer window size assignment workflow
2024-11-08 18:21:03 +04:00
Lizonghang
53cb3a6069
synchronize device info
2024-11-07 22:02:01 +04:00
Lizonghang
ef7fdf70cc
add LLAMA_API llama_profile_device
2024-11-07 09:30:39 +04:00
Zonghang Li
b922418cca
convert MB to GB
2024-11-06 20:47:17 +04:00
Lizonghang
407c71ae52
add cpu and gpu profile
2024-11-06 20:42:28 +04:00
Lizonghang
4e1be1065d
add memory speed test
2024-11-06 10:57:30 +04:00
Zonghang Li
9a03b52785
fix device get name on linux
2024-11-05 22:07:09 +04:00
Lizonghang
a7f3d917a1
add device get name
2024-11-05 22:04:14 +04:00
Zonghang Li
6657885816
fix swap detect on linux
2024-11-05 21:57:09 +04:00
Lizonghang
2d447266e9
add swap capacity test
2024-11-05 21:42:45 +04:00
Lizonghang
9eed6b14bf
add disk read speed test
2024-11-05 21:12:02 +04:00
Lizonghang
9cd66f2145
add profiler
2024-11-05 20:29:09 +04:00
Lizonghang
766ec7862b
test
2024-11-05 17:22:24 +04:00
Lizonghang
76a7fc7527
support different window sizes
2024-10-26 12:34:14 +04:00
Lizonghang
c97ea10617
add mmap prefetch and unloading
2024-10-25 16:33:56 +04:00
Lizonghang
2a01ff5fb1
init
2024-10-23 09:42:32 +04:00
Georgi Gerganov
8c475b97b8
rerank : use [SEP] token instead of [BOS] ( #9737 )
* rerank : use [SEP] token instead of [BOS]
ggml-ci
* common : sanity check for non-NULL tokens
ggml-ci
* ci : adjust rank score interval
ggml-ci
* ci : add shebang to run.sh
ggml-ci
2024-10-05 15:55:04 +03:00
Daniel Kleine
133c7b46b3
Fixed RNG seed docs ( #9723 )
* Update README.md
fixed RNG seed info
* changed print format to unsigned
2024-10-04 10:54:44 +02:00
Ruchira Hasaranga
8277a817f1
console : utf-8 fix for windows stdin ( #9690 )
* utf-8 fix for windows stdin
* Update common/console.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-30 11:23:42 +03:00
matiaslin
faac0bae26
common : ensure llama_batch size does not exceed max size ( #9668 )
A crash was observed when the number of tokens added to a batch exceeds
llama_batch size. An assertion in llama_batch_add was added to protect
against llama_batch size overflow.
2024-09-29 15:25:00 +03:00
Georgi Gerganov
f4d2b8846a
llama : add reranking support ( #9510 )
* py : add XLMRobertaForSequenceClassification [no ci]
* py : fix scalar-tensor conversion [no ci]
* py : fix position embeddings chop [no ci]
* llama : read new cls tensors [no ci]
* llama : add classification head (wip) [no ci]
* llama : add "rank" pooling type
ggml-ci
* server : add rerank endpoint
ggml-ci
* llama : avoid ggml_repeat during classification
* rerank : cleanup + comments
* server : accept /rerank endpoint in addition to /v1/rerank [no ci]
* embedding : parse special tokens
* jina : support v1 reranker
* vocab : minor style
ggml-ci
* server : initiate tests for later
ggml-ci
* server : add docs
* llama : add comment [no ci]
* llama : fix uninitialized tensors
* ci : add rerank tests
ggml-ci
* add reranking test
* change test data
* Update examples/server/server.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* add `--reranking` argument
* update server docs
* llama : fix comment [no ci]
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-09-28 17:42:03 +03:00
Xuan Son Nguyen
afbbfaa537
server : add more env vars, improve gen-docs ( #9635 )
* server : add more env vars, improve gen-docs
* update server docs
* LLAMA_ARG_NO_CONTEXT_SHIFT
2024-09-25 14:05:13 +02:00
Georgi Gerganov
cea1486ecf
log : add CONT level for continuing previous log entry ( #9610 )
2024-09-24 10:15:35 +03:00
Georgi Gerganov
b0f27361f3
sampling : avoid expensive softmax during greedy sampling ( #9605 )
* sampling : avoid expensive softmax during greedy sampling
ggml-ci
* speculative : fix default RNG seed + set sparams.n_probs
* Update tests/test-sampling.cpp
Co-authored-by: slaren <slarengh@gmail.com>
* sampling : add clarifying comment [no ci]
---------
Co-authored-by: slaren <slarengh@gmail.com>
2024-09-24 09:03:17 +03:00
Xuan Son Nguyen
0b3bf966f4
server : add --no-context-shift option ( #9607 )
* server : add --no-context-shift option
* small fix
* Update examples/server/tests/features/embeddings.feature
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* tests : minor fix
* revert usage of GGML_ASSERT
* update server documentation
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-23 22:23:54 +02:00
Bert Wagner
8b836ae731
arg : add env variable for parallel ( #9513 )
* add env variable for parallel
* Update README.md with env: LLAMA_ARG_N_PARALLEL
2024-09-17 16:35:38 +03:00
Vinesh Janarthanan
441b72b91f
main : option to disable context shift ( #9484 )
* added cli arg to disable context shift
* reverted precommit
* updated README.md for main
* white space
* allow disabling context shift in the server
* Update common/arg.cpp
no-context-shift only works for main example
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* added server example to --no-context-shift args
* removed server changes
* white space
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-16 09:20:01 +03:00
Georgi Gerganov
6262d13e0b
common : reimplement logging ( #9418 )
https://github.com/ggerganov/llama.cpp/pull/9418
2024-09-15 20:46:12 +03:00
Georgi Gerganov
0abc6a2c25
llama : llama_perf + option to disable timings during decode ( #9355 )
* llama : llama_perf + option to disable timings during decode
ggml-ci
* common : add llama_arg
* Update src/llama.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* perf : separate functions in the API
ggml-ci
* perf : safer pointer handling + naming update
ggml-ci
* minor : better local var name
* perf : abort on invalid sampler pointer
ggml-ci
---------
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
2024-09-13 09:53:38 +03:00
Ahmad Tameem
2b00fa7997
riscv : modify Makefile and add a RISCV_VECT to print log info ( #9442 )
- Added ggml_cpu_has_riscv_v() in GGML to print system info in log
- Modified Makefile to only use flag when cross compiling for RISC-V
2024-09-12 14:24:31 +03:00
Farbod Bijary
67155ab7f5
feat: Implements retrying logic for downloading models using --model-url flag ( #9255 )
* feat: Implements retrying logic for downloading models using --model-url flag
* Update common/common.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* Update common/common.cpp
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
* apply comments
* implements a retry function to avoid duplication
* fix editorconfig
* change function name
---------
Co-authored-by: farbod <farbod.bjary82@gmail.com>
Co-authored-by: Xuan Son Nguyen <thichthat@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-09-11 11:22:37 +02:00
Xuan Son Nguyen
6cd4e03444
arg : bring back missing ifdef ( #9411 )
* arg : bring back missing ifdef
* replace with llama_supports_gpu_offload
2024-09-10 22:41:29 +02:00
matteo
8d300bd35f
enable --special arg for llama-server ( #9419 )
Co-authored-by: matteo serva <matteo.serva@gmail.com>
2024-09-10 22:40:59 +02:00
slaren
49006c67b4
llama : move random seed generation to the samplers ( #9398 )
* llama_sampler_penalties : clamp penalty_last_n to zero
2024-09-10 18:04:25 +02:00
Xuan Son Nguyen
bfe76d4a17
common : move arg parser code to arg.cpp ( #9388 )
* common : move arg parser to arg.cpp
* better categorize args
* add cmake
* missing climits
* missing cstdarg
* common : more explicit includes
* fix build
* refactor gpt_params_parse
* update server readme
* fix test
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-09-09 23:36:09 +02:00
Xuan Son Nguyen
3f7ccfd649
common : bring back missing args, add env var duplication check ( #9375 )
* common : bring back missing args
* move duplication check to test-arg-parser
* add check for duplicated env var
* correct default values
2024-09-08 18:08:55 +02:00
slaren
a249843d89
common : restore --n-gpu-layers ( #9371 )
2024-09-08 16:44:42 +02:00