koboldcpp/tools/server
Concedo 4356a00f4a Merge branch 'upstream' into concedo_experimental (2025-06-18 00:16:54 +08:00)
# Conflicts:
#	.github/workflows/build.yml
#	ci/run.sh
#	docs/function-calling.md
#	examples/gritlm/gritlm.cpp
#	ggml/CMakeLists.txt
#	ggml/cmake/common.cmake
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-cpu/CMakeLists.txt
#	ggml/src/ggml-cpu/ggml-cpu.c
#	ggml/src/ggml-hip/CMakeLists.txt
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	ggml/src/ggml-vulkan/vulkan-shaders/CMakeLists.txt
#	requirements/requirements-compare-llama-bench.txt
#	scripts/compare-llama-bench.py
#	tests/CMakeLists.txt
Name | Last commit | Date
bench | Merge branch 'upstream' into concedo_experimental | 2025-05-03 12:15:36 +08:00
public | webui: Wrap long numbers instead of infinite horizontal scroll (#14062) | 2025-06-11 16:42:25 +02:00
public_legacy | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
public_simplechat | Merge branch 'upstream' into concedo_experimental | 2025-05-03 12:15:36 +08:00
tests | Merge branch 'upstream' into concedo_experimental | 2025-06-05 11:03:34 +08:00
themes | Merge branch 'upstream' into concedo_experimental | 2025-05-03 12:15:36 +08:00
webui | webui: Wrap long numbers instead of infinite horizontal scroll (#14062) | 2025-06-11 16:42:25 +02:00
chat-llama2.sh | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
chat.mjs | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
chat.sh | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
server.cpp | server : fix incorrect usage of llama_get_embeddings() (#14225) | 2025-06-16 22:33:27 +03:00
utils.hpp | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00