Mirror of https://github.com/LostRuins/koboldcpp.git, synced 2025-09-11 09:34:37 +00:00
# GGUF split Example

CLI to split / merge GGUF files.

**Command line options:**

- `--split`: split a GGUF file into multiple GGUF files (default operation).
- `--split-max-tensors`: maximum number of tensors in each split (default: 128).
- `--merge`: merge multiple GGUF files into a single GGUF file.
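A typical split/merge round trip might look like the sketch below. The model and output names are placeholders, and the exact positional-argument order is an assumption based on the option descriptions above; check `gguf-split --help` in your build for the authoritative usage.

```shell
#!/bin/sh
# Hypothetical example: split a model into shards of at most 128 tensors each.
# "model.gguf" and the "model-split" output prefix are placeholder names.
./gguf-split --split --split-max-tensors 128 model.gguf model-split

# The split is assumed to produce zero-padded shard names such as:
#   model-split-00001-of-00003.gguf
#   model-split-00002-of-00003.gguf
#   model-split-00003-of-00003.gguf

# Merge the shards back into one file, starting from the first shard.
./gguf-split --merge model-split-00001-of-00003.gguf model-merged.gguf
```

Splitting is useful when a single GGUF file exceeds a filesystem or hosting size limit; merging reverses the operation.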