Concedo | 39f3d1cf48 | 2023-05-05 21:34:33 +08:00
    Merge branch 'master' into concedo_experimental
    # Conflicts:
    #   Makefile
    #   README.md
    #   examples/quantize/quantize.cpp

Ivan Stepanov | 34d9f22f44 | 2023-05-04 18:56:27 +02:00
    Wrap exceptions in std::exception to verbose output on exception. (#1316)

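For illustration only, a minimal sketch of the pattern this commit describes: catch std::exception around the work that may throw and print the error message instead of aborting silently. The helper name load_model is hypothetical; this is not the actual llama.cpp code.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Hypothetical stand-in for work that may throw (e.g. loading a model file).
    static void load_model(const std::string & path) {
        throw std::runtime_error("failed to open " + path);
    }

    int main(int argc, char ** argv) {
        try {
            load_model(argc > 1 ? argv[1] : "model.bin");
        } catch (const std::exception & e) {
            // Verbose output on exception: report what went wrong before exiting.
            fprintf(stderr, "error: %s\n", e.what());
            return 1;
        }
        return 0;
    }
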
Concedo | 94827172e0 | 2023-05-02 14:38:31 +08:00
    Merge branch 'master' into concedo
    # Conflicts:
    #   CMakeLists.txt
    #   Makefile
    #   ggml-cuda.cu
    #   ggml-cuda.h

xloem | ea3a0ad6b6 | 2023-05-01 15:58:51 +03:00
    llama : update stubs for systems without mmap and mlock (#1266)
    Co-authored-by: John Doe <john.doe@example.com>

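A rough sketch of the stub pattern this commit refers to, shown here only for mlock: use the real call where the platform provides it, and compile a warning-only no-op elsewhere. The wrapper name mem_lock and the platform check are illustrative assumptions, not the actual llama-util.h code.

    #include <cstddef>
    #include <cstdio>

    #if defined(__unix__) || defined(__APPLE__)
    #include <sys/mman.h>
    #endif

    // Hypothetical wrapper: real mlock where available, a no-op stub elsewhere.
    struct mem_lock {
    #if defined(__unix__) || defined(__APPLE__)
        static constexpr bool SUPPORTED = true;
        bool lock(void * addr, std::size_t len)   { return mlock(addr, len) == 0; }
        void unlock(void * addr, std::size_t len) { munlock(addr, len); }
    #else
        static constexpr bool SUPPORTED = false;
        bool lock(void *, std::size_t) {
            fprintf(stderr, "warning: mlock is not supported on this system\n");
            return false;
        }
        void unlock(void *, std::size_t) {}
    #endif
    };
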
slaren | b925f1f1b0 | 2023-05-01 13:32:22 +02:00
    cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)
    * cuBLAS: fall back to pageable memory if pinned alloc fails
    * cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set

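The behaviour described above can be sketched roughly as follows: skip pinned allocation when GGML_CUDA_NO_PINNED is set, otherwise try cudaMallocHost, and let the caller fall back to ordinary pageable memory if that fails. The helper name host_malloc_pinned is hypothetical; this is an illustrative sketch, not the actual ggml-cuda.cu implementation.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Hypothetical helper: returns pinned host memory, or nullptr when pinned
    // allocation is disabled via GGML_CUDA_NO_PINNED or fails, so the caller
    // can fall back to pageable memory.
    static void * host_malloc_pinned(size_t size) {
        if (std::getenv("GGML_CUDA_NO_PINNED") != nullptr) {
            return nullptr; // pinned memory explicitly disabled
        }
        void * ptr = nullptr;
        cudaError_t err = cudaMallocHost(&ptr, size);
        if (err != cudaSuccess) {
            fprintf(stderr, "warning: failed to pin %zu bytes (%s), using pageable memory\n",
                    size, cudaGetErrorString(err));
            return nullptr;
        }
        return ptr;
    }

    int main() {
        const size_t size = 64u * 1024 * 1024;
        void * buf = host_malloc_pinned(size);
        const bool pinned = (buf != nullptr);
        if (!pinned) {
            buf = std::malloc(size); // pageable fallback
        }
        // ... stage host<->device transfers through buf ...
        if (pinned) { cudaFreeHost(buf); } else { std::free(buf); }
        return 0;
    }
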
Concedo | 0061b90ec6 | 2023-04-30 10:35:02 +08:00
    Merge branch 'master' into concedo_experimental
    # Conflicts:
    #   CMakeLists.txt
    #   Makefile

Georgi Gerganov | 84ca9c2ecf | 2023-04-29 13:48:11 +03:00
    examples : fix save-load-state + rename llama-util.h