koboldcpp/tools/server
Concedo · 21e31e255b · Merge branch 'upstream' into concedo_experimental · 2025-05-13 00:28:35 +08:00

# Conflicts:
#	.github/workflows/build.yml
#	.github/workflows/docker.yml
#	README.md
#	build-xcframework.sh
#	common/CMakeLists.txt
#	examples/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-cuda/CMakeLists.txt
#	ggml/src/ggml-metal/ggml-metal.m
#	ggml/src/ggml-metal/ggml-metal.metal
#	ggml/src/ggml-sycl/CMakeLists.txt
#	ggml/src/ggml-sycl/backend.hpp
#	ggml/src/ggml-sycl/common.hpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	ggml/src/ggml-sycl/mmvq.cpp
#	ggml/src/ggml-sycl/vecdotq.hpp
#	scripts/compare-llama-bench.py
#	src/CMakeLists.txt
#	src/llama-model.cpp
#	src/llama.cpp
#	tests/test-backend-ops.cpp
#	tests/test-opt.cpp
#	tools/llama-bench/README.md
#	tools/llama-bench/llama-bench.cpp
#	tools/mtmd/CMakeLists.txt
#	tools/mtmd/README.md
#	tools/mtmd/clip.cpp
#	tools/rpc/rpc-server.cpp
#	tools/server/CMakeLists.txt
#	tools/server/README.md
Name                Last commit                                                                         Date
bench               Merge branch 'upstream' into concedo_experimental                                   2025-05-03 12:15:36 +08:00
public              server : (webui) rename has_multimodal --> modalities (#13393)                      2025-05-09 09:06:37 +02:00
public_legacy       llama : move end-user examples to tools directory (#13249)                          2025-05-02 20:27:13 +02:00
public_simplechat   Merge branch 'upstream' into concedo_experimental                                   2025-05-03 12:15:36 +08:00
tests               Merge branch 'upstream' into concedo_experimental                                   2025-05-13 00:28:35 +08:00
themes              Merge branch 'upstream' into concedo_experimental                                   2025-05-03 12:15:36 +08:00
webui               server : (webui) rename has_multimodal --> modalities (#13393)                      2025-05-09 09:06:37 +02:00
chat-llama2.sh      llama : move end-user examples to tools directory (#13249)                          2025-05-02 20:27:13 +02:00
chat.mjs            llama : move end-user examples to tools directory (#13249)                          2025-05-02 20:27:13 +02:00
chat.sh             llama : move end-user examples to tools directory (#13249)                          2025-05-02 20:27:13 +02:00
httplib.h           llama : move end-user examples to tools directory (#13249)                          2025-05-02 20:27:13 +02:00
server.cpp          tools : fix uninitialized llama_batch in server (#13436)                            2025-05-11 17:08:26 +02:00
utils.hpp           server : allow content to be null in oaicompat_completion_params_parse (#13477)     2025-05-12 13:56:42 +02:00