koboldcpp/tools/server

Latest commit: b1c500ae2b by Concedo (2026-03-15 11:21:24 +08:00)

Merge commit '2948e6049a' into concedo_experimental

# Conflicts:
#	.github/workflows/build.yml
#	CONTRIBUTING.md
#	docs/backend/VirtGPU/development.md
#	docs/ops.md
#	docs/ops/WebGPU.csv
#	embd_res/templates/GigaChat3-10B-A1.8B.jinja
#	embd_res/templates/GigaChat3.1-10B-A1.8B.jinja
#	ggml/src/ggml-hip/CMakeLists.txt
#	ggml/src/ggml-opencl/CMakeLists.txt
#	ggml/src/ggml-opencl/ggml-opencl.cpp
#	ggml/src/ggml-webgpu/ggml-webgpu-shader-lib.hpp
#	ggml/src/ggml-webgpu/ggml-webgpu.cpp
#	scripts/sync_vendor.py
#	tests/CMakeLists.txt
#	tests/test-backend-ops.cpp
#	tests/test-chat.cpp
#	tests/test-grammar-integration.cpp
#	tests/test-quantize-fns.cpp
| Name | Last commit | Date |
|------|-------------|------|
| bench/ | Merge branch 'upstream' into concedo_experimental | 2025-08-23 11:35:28 +08:00 |
| public/ | New conversations now auto-select the first loaded model (#20403) | 2026-03-12 09:07:05 +01:00 |
| public_legacy/ | Autoparser - complete refactoring of parser architecture (#18675) | 2026-03-06 21:01:00 +01:00 |
| public_simplechat/ | Merge commit '2cd20b72ed' into concedo_experimental | 2026-03-10 22:11:08 +08:00 |
| tests/ | Merge commit '2948e6049a' into concedo_experimental | 2026-03-15 11:21:24 +08:00 |
| themes/ | Merge branch 'upstream' into concedo_experimental | 2026-02-03 19:00:42 +08:00 |
| webui/ | Merge commit '2948e6049a' into concedo_experimental | 2026-03-15 11:21:24 +08:00 |
| chat-llama2.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| chat.mjs | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| chat.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00 |
| README-dev.md | server: add auto-sleep after N seconds of idle (#18228) | 2025-12-21 02:24:42 +01:00 |
| server-common.cpp | common/parser: handle reasoning budget (#20297) | 2026-03-11 10:26:12 +01:00 |
| server-common.h | common/parser: handle reasoning budget (#20297) | 2026-03-11 10:26:12 +01:00 |
| server-context.cpp | common/parser: handle reasoning budget (#20297) | 2026-03-11 10:26:12 +01:00 |
| server-context.h | server: Add pragma once to server-context.h (#19944) | 2026-02-27 18:28:36 +01:00 |
| server-cors-proxy.h | server: Parse port numbers from MCP server URLs in CORS proxy (#20208) | 2026-03-09 17:47:54 +01:00 |
| server-http.cpp | server: fix query params lost when proxying requests in multi-model router mode (#19854) | 2026-02-24 21:46:06 +01:00 |
| server-http.h | server: fix query params lost when proxying requests in multi-model router mode (#19854) | 2026-02-24 21:46:06 +01:00 |
| server-models.cpp | server: Parse port numbers from MCP server URLs in CORS proxy (#20208) | 2026-03-09 17:47:54 +01:00 |
| server-models.h | server: Parse port numbers from MCP server URLs in CORS proxy (#20208) | 2026-03-09 17:47:54 +01:00 |
| server-queue.cpp | server: improve slots scheduling for n_cmpl (#18789) | 2026-01-15 17:10:28 +01:00 |
| server-queue.h | server: improve slots scheduling for n_cmpl (#18789) | 2026-01-15 17:10:28 +01:00 |
| server-task.cpp | common/parser: handle reasoning budget (#20297) | 2026-03-11 10:26:12 +01:00 |
| server-task.h | Autoparser - complete refactoring of parser architecture (#18675) | 2026-03-06 21:01:00 +01:00 |
| server.cpp | webui: Agentic Loop + MCP Client with support for Tools, Resources and Prompts (#18655) | 2026-03-06 10:00:39 +01:00 |