Zonghang Li
36f353e374
check env path before calling fio to ensure we can find it
2025-01-28 13:06:08 +04:00
Lizonghang
2934cf3e8e
reserve 200 MiB for internal gpu usage
2025-01-27 22:14:12 +04:00
Lizonghang
1e2b934d69
add bounds n[m]<=0 for devices without GPUs
2025-01-27 11:13:09 +04:00
Lizonghang
1ca9a43bd1
keep the output layer weights in shared memory by default
2025-01-25 23:31:43 +04:00
Lizonghang
f3dd5776eb
fix kappa and memory bounds, account for look-up table and input/output layer delay
2025-01-25 22:31:40 +04:00
Lizonghang
9e4ba4f06a
fix w init error
2025-01-24 20:26:18 +04:00
Lizonghang
1c0087e919
rename arg --keep-inp-out-in-metal to --keep-out-in-metal
2025-01-23 23:17:06 +04:00
Lizonghang
5fcf020cfb
use the recommended working set size as metal mem bound, rather than the total physical mem
2025-01-23 23:04:40 +04:00
Lizonghang
92f00303d5
set reserved mem to 100MiB
2025-01-23 16:58:11 +04:00
Lizonghang
78a544d716
add metal mem limit
2025-01-23 16:08:52 +04:00
Lizonghang
facb4ea736
add option --keep-inp-out-in-metal and fix bugs in unmap
2025-01-22 11:15:19 +04:00
Lizonghang
871a27f66a
use user input setup in standalone mode
2025-01-21 21:06:09 +04:00
Lizonghang
c701340719
set minimal disk speed to 500 Mbps
2025-01-19 10:09:27 +04:00
Lizonghang
4f5265a78d
fix beta calculation on devices without GPU
2025-01-18 21:32:18 +04:00
Lizonghang
6761ca5358
show device props
2025-01-18 17:25:27 +04:00
Lizonghang
f9d16fbf71
fix manually set layer window and gpu layers
2025-01-18 16:44:26 +04:00
Lizonghang
70bd9db008
fix kappa
2025-01-18 11:51:18 +04:00
Lizonghang
c634c2fcbe
reformat highs log
2025-01-16 09:38:12 +04:00
Lizonghang
94f01993fa
reformat highs log
2025-01-16 09:27:53 +04:00
Zonghang Li
46e99218b4
add arg --cuda-mem
2025-01-16 09:15:34 +04:00
Lizonghang
18c96e8042
add option USE_HIGHS
2025-01-15 20:05:49 +04:00
Zonghang Li
790f702d0c
report k and W
2025-01-15 16:55:21 +04:00
Lizonghang
1e1ba5bb91
add api llama_model_set_n_gpu_layers
2025-01-15 10:47:53 +04:00
Lizonghang
9279a2e3ff
fix error in llama_context_n_gpu_layers
2025-01-15 10:08:41 +04:00
Lizonghang
5d9aadf3d5
use highs to solve the allocation program
2025-01-15 10:04:04 +04:00
Zonghang Li
df813675d0
fix flops count and ram/vram speed test
2024-12-08 10:14:05 +04:00
Lizonghang
cd823546dd
llama_profile_device: add arg n_predict
2024-12-06 16:37:25 +04:00
Lizonghang
68ecabc8c3
add cpu_read_ram_bw, metal_read_vram_bw, cuda_read_vram_bw
2024-11-29 19:04:53 +04:00
Lizonghang
45a1e55eec
reduce kv cache from available memory
2024-11-28 20:21:21 +04:00
Lizonghang
9a7bbce7ad
fix t_load_us
2024-11-28 15:55:21 +04:00
Lizonghang
9cd22177d0
remove arg test_file
2024-11-27 21:34:45 +04:00
Zonghang Li
7ee1423006
add model_flops
2024-11-21 20:06:16 +04:00
Lizonghang
477ecf2084
add llama_model_n_flops
2024-11-20 19:40:27 +04:00
Lizonghang
5fae6ac36f
add cpu flops test
2024-11-09 20:53:42 +04:00
Lizonghang
2bd4d03aa8
add automatic layer window size assignment workflow
2024-11-08 18:21:03 +04:00
Lizonghang
53cb3a6069
synchronize device info
2024-11-07 22:02:01 +04:00
Lizonghang
ef7fdf70cc
add LLAMA_API llama_profile_device
2024-11-07 09:30:39 +04:00
Zonghang Li
b922418cca
convert MB to GB
2024-11-06 20:47:17 +04:00
Lizonghang
407c71ae52
add cpu and gpu profile
2024-11-06 20:42:28 +04:00
Lizonghang
4e1be1065d
add memory speed test
2024-11-06 10:57:30 +04:00
Lizonghang
a7f3d917a1
add device get name
2024-11-05 22:04:14 +04:00
Lizonghang
2d447266e9
add swap capacity test
2024-11-05 21:42:45 +04:00
Lizonghang
9eed6b14bf
add disk read speed test
2024-11-05 21:12:02 +04:00
Lizonghang
9cd66f2145
add profiler
2024-11-05 20:29:09 +04:00
Lizonghang
766ec7862b
test
2024-11-05 17:22:24 +04:00
Lizonghang
76a7fc7527
support different window sizes
2024-10-26 12:34:14 +04:00
Lizonghang
c97ea10617
add mmap prefetch and unloading
2024-10-25 16:33:56 +04:00
Lizonghang
2a01ff5fb1
init
2024-10-23 09:42:32 +04:00
Georgi Gerganov
8c475b97b8
rerank : use [SEP] token instead of [BOS] (#9737)
* rerank : use [SEP] token instead of [BOS]
ggml-ci
* common : sanity check for non-NULL tokens
ggml-ci
* ci : adjust rank score interval
ggml-ci
* ci : add shebang to run.sh
ggml-ci
2024-10-05 15:55:04 +03:00
matiaslin
faac0bae26
common : ensure llama_batch size does not exceed max size ( #9668 )
A crash was observed when the number of tokens added to a batch exceeds
llama_batch size. An assertion in llama_batch_add was added to protect
against llama_batch size overflow.
2024-09-29 15:25:00 +03:00
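The commit above describes guarding llama_batch_add against writing past the preallocated batch. A minimal, self-contained sketch of that idea is below; toy_batch and toy_batch_add are hypothetical names used for illustration only and are not the actual llama.cpp structures or the exact assertion added in the commit.

// Sketch only: a fixed-capacity batch that asserts before appending a token
// past the end of its allocated arrays, mirroring the overflow guard described above.
#include <cassert>
#include <cstdint>
#include <vector>

struct toy_batch {
    int32_t capacity;            // number of token slots allocated up front
    int32_t n_tokens = 0;        // tokens added so far
    std::vector<int32_t> token;  // token ids
    std::vector<int32_t> pos;    // positions

    explicit toy_batch(int32_t cap) : capacity(cap), token(cap), pos(cap) {}
};

// Append one token, refusing to overflow the preallocated batch.
static void toy_batch_add(toy_batch & batch, int32_t id, int32_t p) {
    // The guard: fail loudly instead of writing past the end of the arrays.
    assert(batch.n_tokens < batch.capacity && "toy_batch size exceeded");
    batch.token[batch.n_tokens] = id;
    batch.pos[batch.n_tokens]   = p;
    batch.n_tokens++;
}

int main() {
    toy_batch batch(4);                 // room for 4 tokens
    for (int32_t i = 0; i < 4; ++i) {
        toy_batch_add(batch, 100 + i, i);
    }
    // toy_batch_add(batch, 104, 4);    // a fifth add would trip the assertion
    return 0;
}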