koboldcpp/examples/server
Concedo ce7f9c9a2c Merge branch 'upstream' into concedo_experimental
# Conflicts:
#	.devops/full-rocm.Dockerfile
#	.devops/llama-cli-rocm.Dockerfile
#	.devops/llama-server-rocm.Dockerfile
#	.github/workflows/build.yml
#	.github/workflows/python-type-check.yml
#	CMakeLists.txt
#	CONTRIBUTING.md
#	README.md
#	ci/run.sh
#	examples/embedding/embedding.cpp
#	examples/server/README.md
#	flake.lock
#	ggml/include/ggml.h
#	ggml/src/ggml.c
#	requirements/requirements-convert_legacy_llama.txt
#	scripts/sync-ggml.last
#	src/llama-vocab.cpp
#	src/llama.cpp
#	tests/test-backend-ops.cpp
#	tests/test-grad0.cpp
#	tests/test-tokenizer-0.cpp
2024-10-02 01:00:57 +08:00
Name                 Last commit                                                       Date
bench                Merge branch 'upstream' into concedo_experimental                2024-09-19 14:53:57 +08:00
public               server : add loading html page while model is loading (#9468)    2024-09-13 14:23:11 +02:00
public_simplechat    Merge commit 'df270ef745' into concedo_experimental              2024-09-09 17:10:08 +08:00
tests                Merge branch 'upstream' into concedo_experimental                2024-10-02 01:00:57 +08:00
themes               Merge commit 'df270ef745' into concedo_experimental              2024-09-09 17:10:08 +08:00
chat-llama2.sh       chmod : make scripts executable (#2675)                          2023-08-23 17:29:09 +03:00
chat.mjs             json-schema-to-grammar improvements (+ added to server) (#5978)  2024-03-21 11:50:43 +00:00
chat.sh              server : fix context shift (#5195)                               2024-01-30 20:17:30 +02:00
deps.sh              build: generate hex dump of server assets during build (#6661)   2024-04-21 18:48:53 +01:00
httplib.h            Server: version bump for httplib and json (#6169)                2024-03-20 13:30:36 +01:00
server.cpp           Merge branch 'upstream' into concedo_experimental                2024-10-02 01:00:57 +08:00
utils.hpp            llama : add reranking support (#9510)                            2024-09-28 17:42:03 +03:00