GGUF split Example

CLI to split and merge GGUF files. A usage sketch follows the option list below.

Command line options:

  • --split: split a GGUF file into multiple GGUF files (the default operation).
  • --split-max-size: maximum size per split, given in M or G, e.g. 500M or 2G.
  • --split-max-tensors: maximum number of tensors per split (default: 128).
  • --merge: merge multiple GGUF split files back into a single GGUF file.
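
A minimal usage sketch. The model paths here are hypothetical placeholders; it assumes the tool is built as ./gguf-split and that a models/model-q4_0.gguf file exists. Split output shards follow the <prefix>-00001-of-0000N.gguf naming scheme:

```sh
# Split into shards of at most 256 tensors each; output files are
# named models/model-q4_0-00001-of-0000N.gguf from the given prefix.
./gguf-split --split --split-max-tensors 256 \
    models/model-q4_0.gguf models/model-q4_0

# Alternatively, cap each shard at roughly 2G instead of by tensor count.
./gguf-split --split --split-max-size 2G \
    models/model-q4_0.gguf models/model-q4_0

# Merge the shards back into one file: pass the FIRST shard as input
# and the desired output path; the remaining shards are found from it.
./gguf-split --merge \
    models/model-q4_0-00001-of-00003.gguf models/model-q4_0-merged.gguf
```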