Zonghang Li
0b9720c222
revert default max nodes back to 8192
2025-01-04 19:18:59 +04:00
Zonghang Li
e543fcd55f
fix buf_compute_meta size
2025-01-04 19:16:38 +04:00
Zonghang Li
b2f806a572
increase default max nodes from 8192 to 16384
2025-01-04 11:08:35 +04:00
Zonghang Li
b712ad3044
fix flops test error
2024-12-31 09:37:46 +04:00
Lizonghang
fa31ca8e35
add OS detection
2024-12-30 09:13:12 +04:00
Lizonghang
a7ec685eda
add memcpy speed test
2024-12-29 16:19:08 +04:00
Lizonghang
b642d70188
fix swappable memory in Termux
2024-12-12 15:15:16 +04:00
Lizonghang
8e9ab45458
fix model bytes counter
2024-12-10 14:57:48 +04:00
Lizonghang
d78fa427e7
add memory copy speed test
2024-12-09 10:07:42 +04:00
Lizonghang
2c2171cebf
fix display
2024-12-08 22:57:12 +04:00
Zonghang Li
df813675d0
fix flops count and RAM/VRAM speed test
2024-12-08 10:14:05 +04:00
Lizonghang
cd823546dd
llama_profile_device: add arg n_predict
2024-12-06 16:37:25 +04:00
Lizonghang
a46d56cc60
llama_model_n_flops: remove ctxs
2024-12-06 11:31:53 +04:00
Lizonghang
f1c1d1b929
add support for Q5_K and fix byte count for Q6_K
2024-12-06 07:59:45 +04:00
Lizonghang
4fd3b6679e
fix n_gpu_layers
2024-12-05 20:28:19 +04:00
Lizonghang
e74967c488
throw an error when async upload is used; support will be added later
2024-12-05 14:18:39 +04:00
Lizonghang
6f54a12c7d
add GPU support in llama_model_kvcache_size and llama_model_compute_buf_size
2024-11-29 21:06:32 +04:00
Lizonghang
1123c00e45
restore comments in llama_profile_device
2024-11-29 19:07:07 +04:00
Lizonghang
68ecabc8c3
add cpu_read_ram_bw, metal_read_vram_bw, cuda_read_vram_bw
2024-11-29 19:04:53 +04:00
Lizonghang
0f73d12247
subtract compute buffer size from available memory
2024-11-29 11:15:54 +04:00
Lizonghang
45a1e55eec
subtract KV cache size from available memory
2024-11-28 20:21:21 +04:00
Lizonghang
9a7bbce7ad
fix t_load_us
2024-11-28 15:55:21 +04:00
Lizonghang
740f7f0b95
use multithreaded disk r/w test
2024-11-27 22:14:17 +04:00
Lizonghang
f7507ec20b
fix disk r/w test, add disk access latency, and correct units (GB, GiB)
2024-11-27 21:36:12 +04:00
Zonghang Li
f78c437172
add device_inp_embd_delay test, device_memory_bw test, device_cuda_memory_bw test,
2024-11-26 22:28:02 +04:00
Lizonghang
a7a95b53fe
add q80xf32 and count_n_params
2024-11-24 23:11:12 +04:00
Lizonghang
3fe00a16a0
count model flops for f32xf32, f16xf32, q4kxf32, q6kxf32
2024-11-24 13:13:32 +04:00
Lizonghang
a5ba34169a
add f32, f16, q4k_f32, q6k_f32 flops tests and fix duplicate inp_embd in subgraphs
2024-11-23 21:36:34 +04:00
Zonghang Li
7ee1423006
add model_flops
2024-11-21 20:06:16 +04:00
Lizonghang
477ecf2084
add llama_model_n_flops
2024-11-20 19:40:27 +04:00
Lizonghang
10f6f92c7e
add f32, f16, q8, q4k speed tests for CUDA
2024-11-10 23:41:13 +04:00
Lizonghang
f4260bb346
add device_flops() for CPU, Metal, and CUDA
2024-11-10 23:11:05 +04:00
Lizonghang
5fae6ac36f
add CPU flops test
2024-11-09 20:53:42 +04:00
Lizonghang
2bd4d03aa8
add automatic layer window size assignment workflow
2024-11-08 18:21:03 +04:00
Lizonghang
53cb3a6069
synchronize device info
2024-11-07 22:02:01 +04:00
Lizonghang
ef7fdf70cc
add LLAMA_API llama_profile_device
2024-11-07 09:30:39 +04:00
Lizonghang
407c71ae52
add CPU and GPU profiling
2024-11-06 20:42:28 +04:00
Lizonghang
976a4c3534
fix CUDA support
2024-11-04 11:00:01 +04:00
Lizonghang
9d6a6845ac
fix CUDA support
2024-11-03 12:02:30 +04:00
Lizonghang
684b2ac05b
fix CUDA support
2024-11-03 11:19:50 +04:00
Lizonghang
a83f577c63
reset backend_embd and out_embd as input tensors
2024-11-02 20:40:21 +04:00
Lizonghang
7aa771e701
set backend_embd and out_embd on the device of their adjacent nodes
2024-11-02 09:41:40 +04:00
Lizonghang
bfc3f9e185
split input into a separate subgraph
2024-10-30 23:40:13 +04:00
Lizonghang
64291ad70c
fix next_sub_gf
2024-10-28 11:03:46 +04:00
Lizonghang
925ea39382
fine-tune prefetch
2024-10-28 10:58:48 +04:00
Lizonghang
76a7fc7527
support different window sizes
2024-10-26 12:34:14 +04:00
Lizonghang
c97ea10617
add mmap prefetch and unloading
2024-10-25 16:33:56 +04:00
Lizonghang
2a01ff5fb1
init
2024-10-23 09:42:32 +04:00
Diego Devesa
6374743747
ggml : add backend registry / device interfaces to BLAS backend (#9752)
* ggml : add backend registry / device interfaces to BLAS backend
* fix mmap usage when using host buffers
2024-10-07 21:55:08 +02:00
Georgi Gerganov
d5ac8cf2f2
ggml : add metal backend registry / device (#9713)
* ggml : add metal backend registry / device
ggml-ci
* metal : fix names [no ci]
* metal : global registry and device instances
ggml-ci
* cont : alternative initialization of global objects
ggml-ci
* llama : adapt to backend changes
ggml-ci
* fixes
* metal : fix indent
* metal : fix build when MTLGPUFamilyApple3 is not available
ggml-ci
* fix merge
* metal : avoid unnecessary singleton accesses
ggml-ci
* metal : minor fix [no ci]
* metal : g_state -> g_ggml_ctx_dev_main [no ci]
* metal : avoid reference of device context in the backend context
ggml-ci
* metal : minor [no ci]
* metal : fix maxTransferRate check
* metal : remove transfer rate stuff
---------
Co-authored-by: slaren <slarengh@gmail.com>
2024-10-07 18:27:51 +03:00