koboldcpp/tools/server/tests
Concedo 17c0c8d55d Merge branch 'upstream' into concedo_experimental 2025-12-07 16:48:38 +08:00
# Conflicts:
#	README.md
#	docs/backend/zDNN.md
#	docs/build.md
#	docs/ops.md
#	ggml/CMakeLists.txt
#	ggml/src/CMakeLists.txt
#	ggml/src/ggml-cann/ggml-cann.cpp
#	ggml/src/ggml-opencl/ggml-opencl.cpp
#	ggml/src/ggml-rpc/ggml-rpc.cpp
#	ggml/src/ggml-sycl/convert.cpp
#	ggml/src/ggml-sycl/ggml-sycl.cpp
#	src/llama-quant.cpp
#	tests/test-backend-ops.cpp
#	tools/llama-bench/llama-bench.cpp
#	tools/server/README.md
unit              server: support multiple generations from one prompt (OAI "n" option) (#17775)  2025-12-06 15:54:38 +01:00
.gitignore        llama : move end-user examples to tools directory (#13249)                      2025-05-02 20:27:13 +02:00
conftest.py       server : add Anthropic Messages API support (#17570)                            2025-11-28 12:57:04 +01:00
pytest.ini        llama : move end-user examples to tools directory (#13249)                      2025-05-02 20:27:13 +02:00
requirements.txt  requirements : update transformers/torch for Embedding Gemma (#15828)           2025-09-09 06:06:52 +02:00
tests.sh          scripts : make the shell scripts cross-platform (#14341)                       2025-06-30 10:17:18 +02:00
utils.py          server: add router multi-model tests (#17704) (#17722)                          2025-12-03 15:10:37 +01:00