koboldcpp

KoboldCpp is an easy-to-use AI text-generation software for GGML and GGUF models, inspired by the original KoboldAI. It's a single self-contained distributable that builds off llama.cpp and adds many additional powerful features. Download Releases Here.

Features

  • Single file executable, with no installation required and no external dependencies
  • Runs on CPU or GPU, supports full or partial offloading
  • LLM text generation (Supports all GGML and GGUF models, backwards compatibility with ALL past models)
  • Image Generation and Image Editing (Stable Diffusion 1.5, SDXL, SD3, Flux, Qwen Image, Z-Image, Klein)
  • Video Generation (WAN 2.2)
  • Speech-To-Text (Voice Recognition) via Whisper
  • Text-To-Speech (Voice Generation) via Qwen3TTS, Kokoro, OuteTTS, Parler and Dia
  • Music Generation (Ace Step 1.5)
  • Image Recognition (Multimodal Vision)
  • MCP Server support and tool calling
  • Provides many compatible API endpoints for many popular webservices (KoboldCppApi OpenAiApi OllamaApi A1111ForgeApi ComfyUiApi WhisperTranscribeApi XttsApi OpenAiSpeechApi)
  • Bundled KoboldAI Lite UI with editing tools, save formats, memory, world info, author's note, characters, scenarios.
  • Includes multiple modes (chat, adventure, instruct, storywriter) and UI Themes (aesthetic roleplay, classic writer, corporate assistant, messenger)
  • Supports loading Tavern Character Cards, importing many different data formats from various sites, reading or exporting JSON savefiles and persistent stories.
  • Many other features including new samplers, regex support, websearch, RAG via TextDB, image recognition/vision and more.
  • Ready-to-use binaries for Windows, MacOS, Linux. Runs directly with Colab, Docker, also supports other platforms if self-compiled (like Android (via Termux) and Raspberry Pi).
  • Need help finding a model? Read this!

Windows (Precompiled Binary)

  • Windows binaries are provided in the form of koboldcpp.exe, which is a pyinstaller wrapper containing all necessary files. Download the latest koboldcpp.exe release here
  • To run, simply execute koboldcpp.exe.
  • Launching with no command line arguments displays a GUI containing a subset of configurable settings. Generally you don't have to change much besides the Presets and GPU Layers. Read the --help for more info about each setting.
  • Obtain and load a GGUF model. See here
  • By default, you can connect to http://localhost:5001
  • You can also run it using the command line. For info, please check koboldcpp.exe --help
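
For example, a minimal command-line launch might look like the following sketch (the model filename is a placeholder, and the GPU flags are optional depending on your hardware):

koboldcpp.exe --usevulkan --gpulayers 20 --contextsize 4096 --model yourmodel.gguf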

Linux (Precompiled Binary)

On modern Linux systems, you should download the koboldcpp-linux-x64 prebuilt PyInstaller binary from the releases page. Simply download and run the binary (you may have to chmod +x it first). If you have an older device, you can also try koboldcpp-linux-x64-oldpc instead for the greatest compatibility.

Alternatively, you can also install koboldcpp to the current directory by running the following terminal command:

curl -fLo koboldcpp https://github.com/LostRuins/koboldcpp/releases/latest/download/koboldcpp-linux-x64-oldpc && chmod +x koboldcpp

After running this command you can launch Koboldcpp from the current directory using ./koboldcpp in the terminal (for CLI usage, run with --help). Finally, obtain and load a GGUF model. See here
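
For example, combining the steps above with the small test model linked in the Termux section (any other GGUF model works the same way):

wget https://huggingface.co/concedo/KobbleTinyV2-1.1B-GGUF/resolve/main/KobbleTiny-Q4_K.gguf
./koboldcpp --model KobbleTiny-Q4_K.gguf --contextsize 4096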

MacOS (Precompiled Binary)

  • PyInstaller binaries for modern ARM64 MacOS (M1, M2, M3) are now available! Simply download the MacOS binary
  • In a MacOS terminal window, set the file to executable chmod +x koboldcpp-mac-arm64 and run it with ./koboldcpp-mac-arm64.
  • In newer MacOS you may also have to whitelist it in security settings if it's blocked. Here's a video guide.
  • Alternatively, or for older x86 MacOS computers, you can clone the repo and compile from source code, see Compiling for MacOS below.
  • Finally, obtain and load a GGUF model. See here

Run on Colab

  • KoboldCpp now has an official Colab GPU Notebook! This is an easy way to get started without installing anything in a minute or two. Try it here!
  • Note that KoboldCpp is not responsible for your usage of this Colab Notebook, you should ensure that your own usage complies with Google Colab's terms of use.

Run on RunPod

  • KoboldCpp can now be used on RunPod cloud GPUs! This is an easy way to get started without installing anything in a minute or two, and is very scalable, capable of running 70B+ models at affordable cost. Try our RunPod image here!

Docker

Obtaining a GGUF model

Improving Performance

  • GPU Acceleration: If you're on Windows with an Nvidia GPU, you can get CUDA support out of the box using the --usecuda flag (Nvidia only), or --usevulkan (any GPU). Make sure you select the correct .exe with CUDA support.
  • GPU Layer Offloading: Add --gpulayers to offload model layers to the GPU. The more layers you offload to VRAM, the faster generation speed will become. Experiment to determine the number of layers to offload, and reduce by a few if you run out of memory.
  • Increasing Context Size: Use --contextsize (number) to increase context size, allowing the model to read more text. Note that you may also need to increase the max context in the KoboldAI Lite UI as well (click and edit the number text field).
  • Old CPU Compatibility: If you are having crashes or issues, you can try running in a non-avx2 compatibility mode by adding the --noavx2 flag. You can also try reducing your --blasbatchsize (set -1 to avoid batching).
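
Putting a few of these flags together, a hypothetical invocation for an Nvidia GPU might look like this (the layer count and model filename are placeholders to tune for your setup):

koboldcpp.exe --usecuda --gpulayers 35 --contextsize 8192 --model yourmodel.gguf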

For more information, be sure to run the program with the --help flag, or check the wiki.

Compiling KoboldCpp From Source Code

Compiling on Linux (Using koboldcpp.sh automated compiler script)

When you can't use the precompiled binary directly, we provide an automated build script which uses conda to obtain all dependencies and generates (from source) a ready-to-use PyInstaller binary for Linux users.

  • Clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
  • Simply execute the build script with ./koboldcpp.sh dist and run the generated binary. (Not recommended for systems that already have an existing installation of conda. Dependencies: curl, bzip2)
./koboldcpp.sh # This launches the GUI for easy configuration and launching (X11 required).
./koboldcpp.sh --help # List all available terminal commands for using Koboldcpp, you can use koboldcpp.sh the same way as our python script and binaries.
./koboldcpp.sh rebuild # Automatically generates a new conda runtime and compiles a fresh copy of the libraries. Do this after updating Koboldcpp to keep everything functional.
./koboldcpp.sh dist # Generate your own precompiled binary (Due to the nature of Linux compiling, these will only work on distributions as new as or newer than your own.)

Compiling on Linux (Manual Method)

  • To compile your binaries from source, clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
  • A makefile is provided; simply run make (when compiling, you can set the number of parallel jobs with the -j flag).
  • Optional Vulkan: Link your own install of Vulkan SDK manually with make LLAMA_VULKAN=1
  • You can attempt a CuBLAS build with LLAMA_CUBLAS=1 (or LLAMA_HIPBLAS=1 for AMD). You will need the CUDA Toolkit installed. Some have also reported success with the CMake file, though that is more for Windows.
  • For a full featured build (all backends), do make LLAMA_CUBLAS=1 LLAMA_VULKAN=1. (Note that LLAMA_CUBLAS=1 will not work on Windows; you need Visual Studio.)
  • To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
  • After all binaries are built, you can run the python script with the command python koboldcpp.py [ggml_model.gguf] [port]
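
As an illustrative sequence (not an authoritative recipe), a full-featured portable build followed by a launch could look like:

git clone https://github.com/LostRuins/koboldcpp.git
cd koboldcpp
make LLAMA_CUBLAS=1 LLAMA_VULKAN=1 LLAMA_PORTABLE=1 -j8
python koboldcpp.py yourmodel.gguf 5001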

Compiling on Windows

  • You're encouraged to use the released .exe, but if you want to compile your binaries from source on Windows, the easiest way is:
    • Get the latest release of w64devkit (https://github.com/skeeto/w64devkit). Be sure to use the "vanilla" one, not i686 or other variants, as they will conflict with the precompiled libs!
    • Clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
    • Make sure you are using the w64devkit integrated terminal, then run make in the KoboldCpp source folder. This will create the .dll files for a pure CPU native build (when compiling, you can set the number of parallel jobs with the -j flag).
    • For a GPU build (all backends), do make LLAMA_VULKAN=1. (Note that LLAMA_CUBLAS=1 will not work on Windows; you need Visual Studio.)
    • To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
    • If you want to generate the .exe file, make sure you have the python module PyInstaller installed with pip (pip install PyInstaller). Then run the script make_pyinstaller.bat (an example command sequence is sketched after this section).
    • The koboldcpp.exe file will be in your dist folder.
  • Building with CUDA: Visual Studio, CMake and the CUDA Toolkit are required. Clone the repo, then open the CMake file and compile it in Visual Studio. Copy the koboldcpp_cublas.dll generated into the same directory as the koboldcpp.py file. If you are bundling executables, you may need to include CUDA dynamic libraries (such as cublasLt64_11.dll and cublas64_11.dll) in order for the executable to work correctly on a different PC.
  • Replacing Libraries (Not Recommended): If you wish to use your own version of the additional Windows libraries (Vulkan), you can do it with:
    • Move the respective .lib files to the /lib folder of your project, overwriting the older files.
    • Also, replace the existing versions of the corresponding .dll files located in the project directory root.
    • Make the KoboldCpp project using the instructions above.
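
For reference, a condensed sketch of the w64devkit build and .exe packaging steps described above (the make step runs in the w64devkit terminal; the PyInstaller step needs a working Python/pip install):

git clone https://github.com/LostRuins/koboldcpp.git
cd koboldcpp
make LLAMA_VULKAN=1 LLAMA_PORTABLE=1 -j
pip install PyInstaller
make_pyinstaller.bat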

Compiling on MacOS

  • You can compile your binaries from source. You can clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
  • A makefile is provided; simply run make (when compiling, you can set the number of parallel jobs with the -j flag).
  • If you want Metal GPU support, instead run make LLAMA_METAL=1; note that the MacOS Metal libraries need to be installed.
  • To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
  • After all binaries are built, you can run the python script with the command python koboldcpp.py --model [ggml_model.gguf] (and add --gpulayers (number of layers) if you wish to offload layers to the GPU).
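
As a rough example, a Metal-enabled portable build and launch could look like this (model filename and layer count are placeholders):

make LLAMA_METAL=1 LLAMA_PORTABLE=1 -j
python koboldcpp.py --model yourmodel.gguf --gpulayers 20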

Compiling on OpenBSD

  • Clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
  • The project uses the GNU Makefile format, so you will need gmake: pkg_add gmake
  • Compiling Vulkan support:
    • You will require libvulkan; this is included in the vulkan-loader package, which is a dependency of the vulkan-tools package: pkg_add vulkan-tools or pkg_add vulkan-loader
    • You will require glslc; this is included in the shaderc package: pkg_add shaderc
    • If your gmake terminates with "fatal error: 'ggml-vulkan-shaders.hpp' file not found", the problem is probably that glslc is not installed. See above.
    • OpenBSD's default datasize limit may prevent compilation; ulimit -d 8388608 should work
    • Compile using gmake LLAMA_VULKAN=1
  • After all binaries are built, you can run the python script with the command python3 koboldcpp.py --model [ggml_model.gguf]
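
Condensed into one illustrative session (package names and limits as stated above; the model filename is a placeholder):

pkg_add gmake vulkan-tools shaderc
ulimit -d 8388608
gmake LLAMA_VULKAN=1
python3 koboldcpp.py --model yourmodel.gguf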

Compiling on Android (Termux Installation)

Termux Quick Setup Script (Easy Setup)

  • You can use this auto-installation script to quickly install and build everything and launch KoboldCpp with a model. Simply run:
curl -sSL https://raw.githubusercontent.com/LostRuins/koboldcpp/concedo/android_install.sh | sh

and it will install everything required. Alternatively, you can download the above android_install.sh script to a file, then do chmod +x and run it interactively.

Termux Manual Instructions (DIY Setup)

  • Open termux and run the command apt update
  • Install dependency apt install openssl
  • Install other dependencies with pkg install wget git python
  • Run pkg upgrade
  • Clone the repo git clone https://github.com/LostRuins/koboldcpp.git
  • Navigate to the koboldcpp folder cd koboldcpp
  • Build the project make
  • To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1; this disables usage of ARM intrinsics.
  • Grab a small GGUF model, such as wget https://huggingface.co/concedo/KobbleTinyV2-1.1B-GGUF/resolve/main/KobbleTiny-Q4_K.gguf
  • Start the python server python koboldcpp.py --model KobbleTiny-Q4_K.gguf
  • Connect to http://localhost:5001 on your mobile browser
  • If you encounter any errors, make sure your packages are up-to-date with pkg up and pkg upgrade
  • If you have trouble installing a dependency, you can try the command termux-change-repo and choose a different repo (e.g. Mirror by BFSU)
  • GPU acceleration for Termux may be possible but I have not explored it. If you find a good cross-device solution, do share or PR it.
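
The manual steps above, condensed into a single illustrative Termux session:

apt update && pkg upgrade
apt install openssl && pkg install wget git python
git clone https://github.com/LostRuins/koboldcpp.git && cd koboldcpp
make
wget https://huggingface.co/concedo/KobbleTinyV2-1.1B-GGUF/resolve/main/KobbleTiny-Q4_K.gguf
python koboldcpp.py --model KobbleTiny-Q4_K.gguf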

AMD Users

  • For most users, you can get very decent speeds by selecting the Vulkan option instead, which supports both Nvidia and AMD GPUs.
  • Alternatively, you can try the ROCm fork at https://github.com/YellowRoseCx/koboldcpp-rocm, though this may be outdated.

Third Party Resources

  • These unofficial resources have been contributed by the community, and may be outdated or unmaintained. No official support will be provided for them!
  • GPTLocalhost - KoboldCpp is supported by GPTLocalhost, a local Word Add-in for you to use KoboldCpp in Microsoft Word. A local alternative to "Copilot in Word."

Questions and Help Wiki

  • First, please check out The KoboldCpp FAQ and Knowledgebase which may already have answers to your questions! Also please search through past issues and discussions.
  • If you cannot find an answer, open an issue on this github, or find us on the KoboldAI Discord.

KoboldCpp and KoboldAI API Documentation

KoboldCpp Public Demo

Considerations

  • For Windows: No installation, single file executable, (It Just Works)
  • Since v1.15, CLBlast is required if enabled; the prebuilt Windows binaries are included in this repo. If not found, it will fall back to a mode without CLBlast.
  • Since v1.33, you can set the context size to be above what the model supports officially. It does increase perplexity but should still work well below 4096 even on untuned models. (For GPT-NeoX, GPT-J, and Llama models) Customize this with --ropeconfig.
  • Since v1.42, supports GGUF models for LLAMA and Falcon
  • Since v1.55, lcuda paths on Linux are hardcoded and may require manual changes to the makefile if you do not use koboldcpp.sh for the compilation.
  • Since v1.60, provides native image generation with StableDiffusion.cpp, you can load any SD1.5 or SDXL .safetensors model and it will provide an A1111 compatible API to use.
  • I try to keep backwards compatibility with ALL past llama.cpp models. But you are also encouraged to reconvert/update your models if possible for best results.
  • Since v1.75, openblas has been deprecated and removed in favor of the native CPU implementation.
  • Since v1.107, CLBlast has been deprecated and removed in favor of Vulkan.

License

Notes

  • If you wish, after building the koboldcpp libraries with make, you can rebuild the exe yourself with pyinstaller by using make_pyinstaller.bat
  • API documentation available at /api (e.g. http://localhost:5001/api) and https://lite.koboldai.net/koboldcpp_api. An OpenAI-compatible API is also provided at the /v1 route (e.g. http://localhost:5001/v1); see the example request after this list.
  • All up-to-date GGUF models are supported, and KoboldCpp also includes backward compatibility for older versions/legacy GGML .bin models, though some newer features might be unavailable.
  • An incomplete list of supported architectures follows, but there are many hundreds of other GGUF models. In general, if it's GGUF, it should work.
  • Llama / Llama2 / Llama3 / Alpaca / GPT4All / Vicuna / Koala / Pygmalion / Metharme / WizardLM / Mistral / Mixtral / Miqu / Qwen / Qwen2 / Yi / Gemma / Gemma2 / GPT-2 / Cerebras / Phi-2 / Phi-3 / GPT-NeoX / Pythia / StableLM / Dolly / RedPajama / GPT-J / RWKV4 / MPT / Falcon / Starcoder / Deepseek and many, many more.
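
As a sketch of the OpenAI-compatible route (assuming the standard chat completions request shape; the field names are the usual OpenAI ones and are not taken from this document), something like the following should work against a running instance:

curl http://localhost:5001/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello!"}], "max_tokens": 64}'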

Where can I download AI model files?