koboldcpp/examples/server
Concedo 6d7ef10671 Merge branch 'upstream' into concedo_experimental
Re-enable qwen2vl GPU for Vulkan (https://github.com/ggml-org/llama.cpp/pull/11902)

# Conflicts:
#	.github/workflows/build.yml
#	.github/workflows/docker.yml
#	.gitignore
#	CONTRIBUTING.md
#	Makefile
#	common/CMakeLists.txt
#	common/arg.cpp
#	common/common.cpp
#	examples/main/main.cpp
#	examples/run/run.cpp
#	examples/server/tests/README.md
#	ggml/src/ggml-cuda/mma.cuh
#	scripts/get_chat_template.py
#	tests/test-backend-ops.cpp
#	tests/test-chat-template.cpp
#	tests/test-chat.cpp
2025-02-20 23:17:20 +08:00
bench                Merge branch 'upstream' into concedo_experimental (2025-01-03 11:56:20 +08:00)
public               server : (webui) Enable communication with parent html (if webui is in iframe) (#11940) (2025-02-18 23:01:44 +01:00)
public_legacy        sampling : refactor + optimize penalties sampler (#10803) (2024-12-16 12:31:14 +02:00)
public_simplechat    rewritten checkpoint 1 - before coopmat (2024-12-13 16:55:23 +08:00)
tests                Merge branch 'upstream' into concedo_experimental (2025-02-20 23:17:20 +08:00)
themes               Merge branch 'upstream' into concedo_experimental (2024-12-19 11:57:43 +08:00)
webui                server : (webui) Enable communication with parent html (if webui is in iframe) (#11940) (2025-02-18 23:01:44 +01:00)
chat-llama2.sh       chmod : make scripts executable (#2675) (2023-08-23 17:29:09 +03:00)
chat.mjs             server : revamp chat UI with vuejs and daisyui (#10175) (2024-11-07 17:31:10 -04:00)
chat.sh              server : fix context shift (#5195) (2024-01-30 20:17:30 +02:00)
httplib.h            server : bump httplib to 0.19.0 (#11908) (2025-02-16 17:11:22 +00:00)
server.cpp           speculative : update default params (#11954) (2025-02-19 13:29:42 +02:00)
utils.hpp            tool-call: refactor common chat / tool-call api (+ tests / fixes) (#11900) (2025-02-18 18:03:23 +00:00)