# quantize
You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to build your own quants without any setup.

Note: it is synced from `llama.cpp` main every 6 hours.
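To quantize locally, the `quantize` binary built from this directory converts a GGUF model to a quantized one. A minimal sketch, assuming hypothetical model file names (replace them with your own paths):

```shell
# Usage: ./quantize <input.gguf> <output.gguf> <type> [nthreads]
# File names below are illustrative placeholders.
./quantize ggml-model-f16.gguf ggml-model-Q4_K_M.gguf Q4_K_M 8
```

The quantization type (here `Q4_K_M`) is one of the names listed in the tables below; `nthreads` is optional.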
## Llama 2 7B
| Quantization | Bits per Weight (BPW) |
|---|---|
| Q2_K | 3.35 |
| Q3_K_S | 3.50 |
| Q3_K_M | 3.91 |
| Q3_K_L | 4.27 |
| Q4_K_S | 4.58 |
| Q4_K_M | 4.84 |
| Q5_K_S | 5.52 |
| Q5_K_M | 5.68 |
| Q6_K | 6.56 |
## Llama 2 13B
| Quantization | Bits per Weight (BPW) |
|---|---|
| Q2_K | 3.34 |
| Q3_K_S | 3.48 |
| Q3_K_M | 3.89 |
| Q3_K_L | 4.26 |
| Q4_K_S | 4.56 |
| Q4_K_M | 4.83 |
| Q5_K_S | 5.51 |
| Q5_K_M | 5.67 |
| Q6_K | 6.56 |
## Llama 2 70B
| Quantization | Bits per Weight (BPW) |
|---|---|
| Q2_K | 3.40 |
| Q3_K_S | 3.47 |
| Q3_K_M | 3.85 |
| Q3_K_L | 4.19 |
| Q4_K_S | 4.53 |
| Q4_K_M | 4.80 |
| Q5_K_S | 5.50 |
| Q5_K_M | 5.65 |
| Q6_K | 6.56 |
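The BPW figures above translate directly into approximate file sizes: multiply the parameter count by bits per weight and divide by eight. A quick sketch, using nominal parameter counts (7e9, 13e9, 70e9 — actual counts differ slightly) and the Q4_K_M rows from the tables above:

```python
def quant_size_gib(n_params: float, bpw: float) -> float:
    """Approximate quantized file size in GiB from bits per weight."""
    return n_params * bpw / 8 / 2**30

# Nominal parameter counts (illustrative) and Q4_K_M BPW from the tables.
for name, n_params, bpw in [("7B", 7e9, 4.84), ("13B", 13e9, 4.83), ("70B", 70e9, 4.80)]:
    print(f"Llama 2 {name} @ Q4_K_M: ~{quant_size_gib(n_params, bpw):.1f} GiB")
```

Real files are slightly larger because some tensors (e.g. embeddings and output) are kept at higher precision and metadata adds overhead.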