koboldcpp/examples/server

Latest commit: da6cf261a8 by Concedo, 2024-10-05 22:24:08 +08:00
Merge branch 'upstream' into concedo_experimental

# Conflicts:
#	.github/workflows/build.yml
#	.github/workflows/close-issue.yml
#	.github/workflows/nix-ci-aarch64.yml
#	.github/workflows/nix-ci.yml
#	README.md
#	ci/run.sh
#	examples/server/README.md
#	ggml/src/ggml-cuda.cu
#	ggml/src/ggml-metal.m
#	scripts/sync-ggml.last
#	tests/test-backend-ops.cpp
Name                Last commit                                                      Date
bench               Merge branch 'upstream' into concedo_experimental                2024-09-19 14:53:57 +08:00
public              server : add loading html page while model is loading (#9468)    2024-09-13 14:23:11 +02:00
public_simplechat   Merge commit 'df270ef745' into concedo_experimental              2024-09-09 17:10:08 +08:00
tests               Merge branch 'upstream' into concedo_experimental                2024-10-02 01:00:57 +08:00
themes              Merge commit 'df270ef745' into concedo_experimental              2024-09-09 17:10:08 +08:00
chat-llama2.sh      chmod : make scripts executable (#2675)                          2023-08-23 17:29:09 +03:00
chat.mjs            json-schema-to-grammar improvements (+ added to server) (#5978)  2024-03-21 11:50:43 +00:00
chat.sh             server : fix context shift (#5195)                               2024-01-30 20:17:30 +02:00
deps.sh             build: generate hex dump of server assets during build (#6661)   2024-04-21 18:48:53 +01:00
httplib.h           Server: version bump for httplib and json (#6169)                2024-03-20 13:30:36 +01:00
server.cpp          Merge branch 'upstream' into concedo_experimental                2024-10-05 22:24:08 +08:00
utils.hpp           llama : add reranking support (#9510)                            2024-09-28 17:42:03 +03:00