koboldcpp/tools/server
Concedo 9eb9e4eb83 Merge commit '8a70973557' into concedo_experimental
# Conflicts:
#	docs/backend/CANN.md
#	docs/backend/SYCL.md
#	examples/model-conversion/scripts/utils/tensor-info.py
#	ggml/src/ggml-opencl/ggml-opencl.cpp
#	ggml/src/ggml-opencl/kernels/expm1.cl
#	ggml/src/ggml-opencl/kernels/mean.cl
#	ggml/src/ggml-opencl/kernels/softplus.cl
#	ggml/src/ggml-opencl/kernels/sum_rows.cl
#	ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
#	ggml/src/ggml-webgpu/ggml-webgpu.cpp
#	ggml/src/ggml-webgpu/wgsl-shaders/common_decls.tmpl
#	ggml/src/ggml-webgpu/wgsl-shaders/embed_wgsl.py
#	ggml/src/ggml-webgpu/wgsl-shaders/get_rows.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_decls.tmpl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_reg_tile.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_subgroup_matrix.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/mul_mat_vec.wgsl
#	ggml/src/ggml-webgpu/wgsl-shaders/scale.wgsl
#	tools/server/webui/src/lib/components/app/chat/ChatScreen/ChatScreen.svelte
2026-02-20 14:36:49 +08:00
bench Merge branch 'upstream' into concedo_experimental 2025-08-23 11:35:28 +08:00
public Pre-MCP UI and architecture cleanup (#19689) 2026-02-18 12:02:02 +01:00
public_legacy docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
public_simplechat Merge branch 'upstream' into concedo_experimental 2025-05-03 12:15:36 +08:00
tests Merge commit '88d23ad515' into concedo_experimental 2026-01-29 22:25:56 +08:00
themes Merge branch 'upstream' into concedo_experimental 2026-02-03 19:00:42 +08:00
webui Merge commit '8a70973557' into concedo_experimental 2026-02-20 14:36:49 +08:00
chat-llama2.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
chat.mjs llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
chat.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
README-dev.md server: add auto-sleep after N seconds of idle (#18228) 2025-12-21 02:24:42 +01:00
server-common.cpp server: /v1/responses (partial) (#18486) 2026-01-21 17:47:23 +01:00
server-common.h server: /v1/responses (partial) (#18486) 2026-01-21 17:47:23 +01:00
server-context.cpp server: save generated text for the /slots endpoint (for LLAMA_SERVER_SLOTS_DEBUG=1) (#19622) 2026-02-18 18:53:37 +01:00
server-context.h server : support preserving reasoning_content in assistant message (#18994) 2026-01-22 21:30:06 +01:00
server-http.cpp server: do not log certain endpoints (avoid log spam) (#19028) 2026-01-22 19:24:37 +01:00
server-http.h server: split HTTP into its own interface (#17216) 2025-11-17 22:05:44 +01:00
server-models.cpp server: print actual model name in "model not found" error (#19117) 2026-02-02 16:55:27 +01:00
server-models.h server : fix router child env in containerized environments (#18562) 2026-01-05 14:12:05 +01:00
server-queue.cpp server: improve slots scheduling for n_cmpl (#18789) 2026-01-15 17:10:28 +01:00
server-queue.h server: improve slots scheduling for n_cmpl (#18789) 2026-01-15 17:10:28 +01:00
server-task.cpp spec : remove check rate (#19377) 2026-02-09 15:30:50 +02:00
server-task.h server : wrap around the "id_slot" parameter (#19207) 2026-01-30 19:46:10 +02:00
server.cpp server: /v1/responses (partial) (#18486) 2026-01-21 17:47:23 +01:00