Concedo
282a647689
Merge commit '467576b6cc' into concedo_experimental
...
# Conflicts:
# .gitignore
# Makefile
# README.md
# common/common.h
# docs/build.md
# examples/infill/infill.cpp
# examples/perplexity/perplexity.cpp
# examples/server/README.md
# ggml/CMakeLists.txt
# ggml/src/CMakeLists.txt
# ggml/src/ggml-cuda/CMakeLists.txt
# scripts/sync-ggml-am.sh
# scripts/sync-ggml.sh
# tests/CMakeLists.txt
# tests/test-backend-ops.cpp
# tests/test-opt.cpp
# tests/test-quantize-perf.cpp
2024-11-21 16:05:21 +08:00
Concedo
272828cab0
tweaks to chat template
2024-11-21 11:10:30 +08:00
kallewoof
547ab2aebb
API: add /props route (#1222)
...
* API: add an /extra/chat_template route
A lot of manual tweaking is done when swapping between models. We can automate some of it, or at least make better assumptions, by exposing more information such as the chat template. This PR adds an endpoint /extra/chat_template which returns the model's chat template string as-is under a 'chat_template' key. The front end can then use it to derive the proper templates, use it directly, or at least warn the user when they try to use e.g. a Mistral preset with a Llama 3.1 model.
* switch to pre-established /props endpoint for chat template
* bug-fix (upstream): off-by-one in string juggling
2024-11-21 10:58:32 +08:00
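A minimal sketch of how a client might consume this: fetch /props and read the raw JSON, which carries the template under a 'chat_template' key. This is not code from the PR; the host and port (koboldcpp's usual 5001) are assumptions, and a real front end would parse the JSON rather than print it.

    /* hedged example: GET /props with libcurl and dump the JSON body */
    #include <stdio.h>
    #include <curl/curl.h>

    static size_t on_body(char *data, size_t size, size_t nmemb, void *userp) {
        (void)userp;
        fwrite(data, size, nmemb, stdout); /* look for the "chat_template" key */
        return size * nmemb;
    }

    int main(void) {
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        /* assumed default koboldcpp address; adjust to your server */
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:5001/props");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : 1;
    }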
Concedo
8ab3eb89a8
updated lite
2024-11-21 10:43:48 +08:00
Concedo
a439dcb38e
multiplayer error handling
2024-11-19 23:31:48 +08:00
Concedo
1b663e10c8
first functional multiplayer
2024-11-19 22:49:28 +08:00
Concedo
8db8154a25
Merge branch 'concedo_experimental' of https://github.com/LostRuins/koboldcpp into concedo_experimental
2024-11-19 18:09:29 +08:00
Concedo
14cbd07eaa
more wip multiplayer
2024-11-19 18:09:26 +08:00
pandora
a548108dd2
Create Mistral-V7.json (#1224)
2024-11-19 10:45:50 +08:00
Concedo
ee586b9a9d
fixed vulkan
2024-11-19 01:26:31 +08:00
Concedo
d5feaa8a3d
fixed old mixtral models, but at what cost? was it worth it?
2024-11-19 01:01:25 +08:00
GPTLocalhost (Word Add-in)
aacb6c3a70
Add GPTLocalhost as third-party resource (#1221)
2024-11-18 10:17:06 +08:00
Concedo
39124828ab
wip multiplayer
2024-11-17 23:29:25 +08:00
Johannes Gäßler
467576b6cc
CMake: default to -arch=native for CUDA build (#10320)
2024-11-17 09:06:34 +01:00
Diego Devesa
eda7e1d4f5
ggml : fix possible buffer use after free in sched reserve (#9930)
2024-11-17 08:31:17 +02:00
Georgi Gerganov
24203e9dd7
ggml : inttypes.h -> cinttypes (#0)
...
ggml-ci
2024-11-17 08:30:29 +02:00
Georgi Gerganov
5d9e59979c
ggml : adapt AMX to tensor->grad removal (#0)
...
ggml-ci
2024-11-17 08:30:29 +02:00
Georgi Gerganov
a4200cafad
make : add ggml-opt (#0)
...
ggml-ci
2024-11-17 08:30:29 +02:00
Georgi Gerganov
84274a10c3
tests : remove test-grad0
2024-11-17 08:30:29 +02:00
Georgi Gerganov
68fcb4759c
ggml : fix compile warnings (#0)
...
ggml-ci
2024-11-17 08:30:29 +02:00
Johannes Gäßler
8a43e940ab
ggml: new optimization interface (ggml/988)
2024-11-17 08:30:29 +02:00
Georgi Gerganov
5c9a8b22b1
scripts : update sync
2024-11-17 08:30:29 +02:00
Concedo
e7897f3257
update docs
2024-11-17 11:43:49 +08:00
FirstTimeEZ
0fff7fd798
docs : vulkan build instructions to use git bash mingw64 (#10303)
2024-11-17 00:29:18 +01:00
Johannes Gäßler
4e54be0ec6
llama/ex: remove --logdir argument (#10339)
2024-11-16 23:00:41 +01:00
Concedo
d6932bbff8
test fix linux build
2024-11-17 02:43:42 +08:00
Concedo
e1f0b0bedd
try fix macos build (+1 squashed commit)
...
Squashed commits:
[ae66dddfd] try fix macos build
2024-11-17 02:37:08 +08:00
Georgi Gerganov
db4cfd5dbc
llamafile : fix include path (#0)
...
ggml-ci
2024-11-16 20:36:26 +02:00
Georgi Gerganov
8ee0d09ae6
make : auto-determine dependencies (#0)
2024-11-16 20:36:26 +02:00
Concedo
f6e9d11636
try with 2 parallel jobs
2024-11-17 01:46:41 +08:00
Concedo
952328fdc8
try fix cuda build
2024-11-17 01:41:52 +08:00
Concedo
9acfe96c77
fix cuda build
2024-11-16 21:58:22 +08:00
MaggotHATE
bcdb7a2386
server: (web UI) Add samplers sequence customization (#10255)
...
* Samplers sequence: simplified and input field.
* Removed unused function
* Modify and use `settings-modal-short-input`
* rename "name" --> "label"
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-11-16 14:26:54 +01:00
Concedo
a8694698fd
accept gguf text encoders for sd
2024-11-16 17:23:02 +08:00
Concedo
590553ef07
Merge branch 'upstream' into concedo_experimental
...
# Conflicts:
# .devops/llama-cli-intel.Dockerfile
# .devops/llama-server-intel.Dockerfile
# .github/workflows/build.yml
# CMakePresets.json
# Makefile
# docs/backend/SYCL.md
# docs/build.md
# ggml/CMakeLists.txt
# ggml/src/ggml-cpu/CMakeLists.txt
# scripts/compare-llama-bench.py
# scripts/sync-ggml-am.sh
# scripts/sync-ggml.last
2024-11-16 17:20:14 +08:00
Concedo
70aee82552
attempts a backflip, but does he stick the landing?
2024-11-16 17:05:45 +08:00
Georgi Gerganov
f245cc28d4
scripts : fix missing key in compare-llama-bench.py (#10332)
2024-11-16 10:32:50 +02:00
Jeff Bolz
772703c8ff
vulkan: Optimize some mat-vec mul quant shaders (#10296)
...
Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses
the B loads across the rows and also reuses some addressing calculations.
This required manually partially unrolling the loop, since the compiler
is less willing to unroll outer loops.
Add bounds-checking on the last iteration of the loop. I think this was at
least partly broken before.
Optimize the Q4_K shader to vectorize most loads and reduce the number of
bit twiddling instructions.
2024-11-16 07:26:57 +01:00
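To make the "two result elements per workgroup" idea concrete, here is a hedged plain-C sketch of the access pattern; the actual change is a Vulkan compute shader operating on quantized blocks, and all names below are hypothetical. Each pass computes two output rows so every load of the shared vector B is reused across both, and the odd tail row is handled separately, mirroring the bounds check on the last iteration.

    /* illustrative only: two mat-vec results per pass, reusing B loads */
    void matvec_two_rows(const float *A, const float *B, float *y,
                         int nrows, int ncols) {
        int r = 0;
        for (; r + 1 < nrows; r += 2) {       /* two results per "workgroup" */
            float acc0 = 0.0f, acc1 = 0.0f;
            for (int c = 0; c < ncols; ++c) {
                const float b = B[c];         /* one B load feeds both rows */
                acc0 += A[(r + 0) * ncols + c] * b;
                acc1 += A[(r + 1) * ncols + c] * b;
            }
            y[r + 0] = acc0;
            y[r + 1] = acc1;
        }
        if (r < nrows) {                      /* bounds check: leftover row */
            float acc = 0.0f;
            for (int c = 0; c < ncols; ++c) acc += A[r * ncols + c] * B[c];
            y[r] = acc;
        }
    }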
Concedo
a5f8e596d3
unset sc if ff off
2024-11-16 10:52:33 +08:00
FirstTimeEZ
dd3a6ce9f8
vulkan : add cmake preset debug/release (#10306)
2024-11-16 02:59:33 +01:00
Dan Johansson
1e58ee1318
ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324)
2024-11-16 01:53:37 +01:00
FirstTimeEZ
89e4caaaf0
llama : save number of parameters and the size in llama_model (#10286)
...
fixes #10285
2024-11-16 01:42:13 +01:00
Srihari-mcw
74d73dc85c
Make updates to fix issues with clang-cl builds while using AVX512 flags (#10314)
2024-11-15 22:27:00 +01:00
Johannes Gäßler
4047be74da
scripts: update compare-llama-bench.py (#10319)
2024-11-15 21:19:03 +01:00
slaren
883d206fbd
ggml : fix some build issues
2024-11-15 21:45:32 +02:00
Georgi Gerganov
09ecbcb596
cmake : fix ppc64 check (whisper/0)
...
ggml-ci
2024-11-15 15:44:06 +02:00
thewh1teagle
3225008973
ggml : vulkan logs (whisper/2547)
2024-11-15 15:44:06 +02:00
Georgi Gerganov
cbf5541a82
sync : ggml
2024-11-15 15:44:06 +02:00
Eve
18429220bd
AVX BF16 and single scale quant optimizations (#10212)
...
* use 128-bit loads (I've tried 256->128 to death and it's slower)
* double accumulator
* avx bf16 vec dot
* +3% q4_0 inference
* +7% tg +5% pp compared to master
* slower f16c version, kept for reference
* 256-bit version, also slow. I tried :)
* revert f16
* faster with madd
* split to functions
* Q8_0 and IQ4_NL, 5-7% faster
* fix potential overflow (performance reduced)
* 16-bit add for q4_0 only
* merge
2024-11-15 12:47:58 +01:00
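The bf16 part of this commit rests on a simple bit trick: a bf16 value is the upper 16 bits of an IEEE-754 f32, so shifting it left by 16 recovers the full float. A hedged scalar-C sketch of that trick follows; the commit itself vectorizes the same idea with 128-bit AVX loads and a doubled accumulator, and none of these names come from the source.

    #include <stdint.h>
    #include <string.h>

    /* bf16 is the top half of an f32: widen by shifting into the high bits */
    static inline float bf16_to_f32(uint16_t h) {
        uint32_t bits = (uint32_t)h << 16;
        float f;
        memcpy(&f, &bits, sizeof f); /* safe type pun */
        return f;
    }

    /* scalar reference dot product; the commit's version is vectorized */
    float vec_dot_bf16(const uint16_t *x, const uint16_t *y, int n) {
        float acc = 0.0f;
        for (int i = 0; i < n; ++i)
            acc += bf16_to_f32(x[i]) * bf16_to_f32(y[i]);
        return acc;
    }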
R0CKSTAR
f0204a0ec7
ci: build test musa with cmake (#10298)
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-11-15 12:47:25 +01:00