* UI: implement basic UI components
* util: implement performance monitor; wrap it with a viewmodel
* util: implement user preferences utility
* UI: implement core flow's screens
* UI: add a new MainActivity; update manifest
* [WIP] DI: implement simple local vm factory provider
* UI: disable triggering drawer via gesture; enable alert dialog on back navigation inside conversation and benchmark
* UI: allow drawer's gesture control only on Home and Settings screens; enable alert dialog on back navigation inside conversation and benchmark
* UI: split a nested parent settings screen into separate child settings screens
* UI: polish system prompt setup UI
* Deps: bump Kotlin plugin; introduce KSP; apply in :app subproject
* DB: setup Room database
* data: introduce repo for System Prompt; flow data from Room to VM
* bugfix: properly handle user's quitting conversation screen while tokens in generation
* UI: rename `ModeSelection` to `ModelLoading` for better clarity
* UI: update app name to be more Arm
* UI: polish conversation screen
* data: code polish
* UI: code polish
* bugfix: handle user quitting on model loading
* UI: lock user in alert dialog while model is unloading
* vm: replace token metrics stubs with actual implementation
* UI: refactor top app bars
* nit: combine temperatureMetrics and useFahrenheit
* DI: introduce Hilt plugin + processor + lib dependencies
* DI: make app Hilt injectable
* DI: make viewmodels Hilt injectable
* DI: replace manual DI with Hilt DI
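As a rough illustration of what the manual factory was replaced with, a Hilt module along these lines could provide app-wide dependencies (the class and provider names below are hypothetical, not taken from the actual AppModule):
```kotlin
import dagger.Module
import dagger.Provides
import dagger.hilt.InstallIn
import dagger.hilt.components.SingletonComponent
import javax.inject.Singleton

// Placeholder dependency standing in for a real app service such as the performance monitor.
class PerformanceMonitor

// Hypothetical sketch of a Hilt module replacing the manual local ViewModel factory provider.
@Module
@InstallIn(SingletonComponent::class)
object AppModule {

    @Provides
    @Singleton
    fun providePerformanceMonitor(): PerformanceMonitor = PerformanceMonitor()
}
```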
* UI: optimize AppContent's composing
* bugfix: wait for model to load before navigating to benchmark screen; use NavigationActions instead of raw navController
* UI: navigation with more natural animated transitions
* DI: Optimize AppModule
* Feature: Introduce ModelRepository and ModelsManagementViewModel; update AppModule
* UI: polish UI for ModelsManagementScreen; inject ModelsManagementViewModel
* DI: abstract the protocol of SystemPromptRepository; update AppModule
* data: [WIP] prepare for ModelRepository refactor & impl
* data: introduce Model entity and DAO; update DI module
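A minimal Room entity and DAO in the spirit of this commit might look like the sketch below; the field names and queries are illustrative assumptions, not the app's actual schema:
```kotlin
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query

// Hypothetical Room entity for an imported model; fields are assumptions.
@Entity(tableName = "models")
data class ModelEntity(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val name: String,
    val filePath: String,
    val sizeBytes: Long,
)

// Hypothetical DAO exposing the queries a ModelRepository might need.
@Dao
interface ModelDao {
    @Query("SELECT * FROM models ORDER BY name")
    suspend fun getAll(): List<ModelEntity>

    @Insert
    suspend fun insert(model: ModelEntity): Long
}
```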
* UI: replace Models Management screen's stubbing with instrumentation
* UI: polish sort order menu
* data: import local model with file picker
* bugfix: use List instead of Collection for ModelDao's deletion
* data: add a util file for extracting file name & size and model metadata
* UI: enrich ModelManagementState; extract filename to show correct importing UI
* UI: implement multiple models deletion; update Models Management screen
* UI: handle back navigation when user is in multi-selection mode
* util: extract file size formatting into ModelUtils
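A file size formatter of the kind extracted here could look roughly like this (a minimal sketch; the real ModelUtils helper isn't shown in this log):
```kotlin
import kotlin.math.log10
import kotlin.math.pow

// Hypothetical human-readable file size formatter, similar in spirit
// to what a ModelUtils/FormatUtils helper might expose.
fun formatFileSize(bytes: Long): String {
    if (bytes <= 0) return "0 B"
    val units = listOf("B", "KB", "MB", "GB", "TB")
    val group = (log10(bytes.toDouble()) / log10(1024.0)).toInt().coerceAtMost(units.size - 1)
    return "%.1f %s".format(bytes / 1024.0.pow(group), units[group])
}
```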
* UI: add a confirmation step when user picks a file; refactor model import overlay into AlertDialog
* UI: extract a shared ModelCard component
* UI: replace model selection screen's data stubbing; add empty view
* nit: tidy SystemPromptViewModel
* Util: split FileUtils from ModelUtils; extract copy methods into FileUtils
* data: pass through getModelById from ModelDao into ModelRepository
* core: extract conversation and benchmark logic into InferenceManager; add logs and missing state updates in stub InferenceEngine
* vm: split mono MainViewModel into separate individual ViewModels
* vm: merge SystemPromptViewModel into ModelLoadingViewModel
* core: break down InferenceManager due to Interface Segregation Principle
* UI: show model card in Model Loading screen
* UI: show model card in Conversation screen
* UI: unify Model Card components
* core: swap in LLamaAndroid and mark stub engine for testing only
* data: allow canceling the ongoing model import
* UI: update UI for ongoing model import's cancellation
* LLama: update engine state after handling the cancellation of sendUserPrompt
* VM: handle the cancellation of ongoing token generation
* LLama: refactor loadModel by splitting the system prompt setting into a separate method
* feature: check for available space before copying local model
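A pre-copy free space check along these lines would be one way to implement this (a sketch; the actual check and safety margin used by the app aren't shown here):
```kotlin
import java.io.File

// Hypothetical check: only start copying if the destination volume has enough
// free space for the model file plus a small safety margin.
fun hasEnoughSpaceFor(source: File, destinationDir: File, marginBytes: Long = 64L * 1024 * 1024): Boolean =
    destinationDir.usableSpace >= source.length() + marginBytes
```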
* UI: centralize the AppScaffold and modularize its configs
* UI: refactor BottomBarConfig.ModelsManagement APIs
* UI: combine TopBarConfig and BottomBarConfig into each route's ScaffoldConfig
* UI: replace ugly optional as casts in AppScaffold with extension functions
* UI: fix the typo `totalGb` in `StorageMetrics`
* UI: remove code duplication in sort menu
* LLama: add ModelUnloadingState to engine State; add missing state checks in stub engine; fix instrumentation engine's error messages
* UI: refactor back handling by removing centralized BackHandlerSetup and UnloadModelConfirmationDialog from AppContent
* UI: implement BenchmarkScreen's individual back handling
* LLama: add a new Initializing state; add two extension properties; rename LibraryLoaded state to Initialized
* UI: Introduce an abstract ViewModel to handle additional model unloading logic
* UI: expose a single facade ModelUnloadDialogHandler; move UnloadModelState into ModelUnloadingViewModel.kt
* UI: migrate ModelLoadingScreen onto ModelLoadingViewModel; update & refine ModelLoadingScreen
* UI: migrate ConversationViewModel onto ModelLoadingViewModel; update & refine ConversationScreen
* nit: extract app name into a constant value; remove unused onBackPressed callbacks
* UI: update AppContent to pass in correct navigation callbacks
* nit: polish ModelLoadingScreen UI
* core: throw Exception instead of returning null if model fails to load
* navigation: sink model loading state management from AppContent down into ModelLoadingScreen; pass ModelLoadingMetrics to Benchmark and Conversation screens
* gguf: add GGUF metadata data holder and its corresponding extractor implementation
* DB: introduce Kotlin serialization extension's library and plugin; add Room runtime library
* GGUF: make GgufMetadata serializable in order to be compatible with Room
* nit: refactor data.local package structure
* nit: rename lastUsed field to dateLastUsed; add dateAdded field
* UI: refactor ModelCard UI to show GGUF metadata
* UI: update ModelSelectionScreen with a preselect mechanism
* UI: polish model card
* nit: allow deselect model on Model Selection screen
* nit: revert accidental committing of debug code
* UI: polish ModelLoading screen
* util: extract formatting helper functions from FileUtils into a new FormatUtils
* UI: polish model cards on Benchmark and Conversation screens to show model loading metrics
* UI: show a Snack bar to warn user that system prompt is not always supported
* UI: handle back press on Model Selection screen
* UI: finally support theme modes; remove hardcoded color schemes, default to dynamic color scheme implementation
* feature: support searching on Model Selection screen
* nit: move scaffold related UI components into a separate package
* UI: extract InfoView out into a separate file for reusability
* data: move Model related actions (query, filter, sort) into ModelInfo file
* UI: animate FAB on model preselection states
* feature: support filtering in Model Management screen
* ui: show empty models info in Model Management screen
* ui: add filter off icon to "Clear filters" menu item
* [WIP] ui: polish Benchmark screen; implement its bottom app bar
* ui: polish Benchmark screen; implement its bottom app bar's rerun and share
* nit: disable mode selection's radio buttons when loading model
* feature: implement Conversation screen's bottom app bar
* pkg: restructure BottomAppBars into separate files in a child package
* pkg: restructure TopBarApps into separate files in a child package
* pkg: restructure system metrics into a separate file
* UI: polish Conversation screen
* data: update system prompt presets
* UI: allow hide or show model card on Conversation & Benchmark screens; fix message arrangement
* data: update & enhance system prompt presets
* deps: introduce Retrofit2
* data: implement HuggingFace data model, data source with Retrofit API
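For context, a Retrofit service for the Hugging Face Hub API might be declared roughly as follows; the response fields and query parameters shown are assumptions rather than the app's actual definitions:
```kotlin
import retrofit2.http.GET
import retrofit2.http.Query

// Hypothetical response model; only a few illustrative fields.
data class HuggingFaceModel(
    val id: String,
    val downloads: Long? = null,
    val tags: List<String> = emptyList(),
)

// Hypothetical Retrofit service for querying models on the Hugging Face Hub.
interface HuggingFaceApiService {
    @GET("api/models")
    suspend fun searchModels(
        @Query("search") query: String,
        @Query("limit") limit: Int = 20,
    ): List<HuggingFaceModel>
}
```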
* data: update Model data repository to support fetching HuggingFace models
* [WIP] UI: replace the HuggingFace stub in Model Management screen with actual API call
* UI: map language codes into country Emojis
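The language-to-flag mapping can be done by translating an ISO 639-1 language code to an ISO 3166-1 country code and then rendering that code as Unicode regional indicator symbols; a minimal sketch (with an assumed, partial mapping table) is below:
```kotlin
// Hypothetical, partial ISO 639-1 -> ISO 3166-1 mapping; the app's real table is larger.
val languageToCountry = mapOf(
    "en" to "GB",
    "fr" to "FR",
    "de" to "DE",
    "ja" to "JP",
    "zh" to "CN",
)

// Convert a two-letter country code to a flag emoji via regional indicator symbols.
fun countryCodeToEmoji(countryCode: String): String =
    countryCode.uppercase()
        .map { Character.toChars(0x1F1E6 + (it - 'A')) }
        .joinToString("") { String(it) }

fun languageToFlagEmoji(languageCode: String): String? =
    languageToCountry[languageCode.lowercase()]?.let(::countryCodeToEmoji)
```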
* ui: add "clear results" action to Benchmark screen
* nit: print current pp & tg in llama-bench
* UI: disable landscape mode; prevent duplicated benchmark running
* llama: migrate C/CXX flags into CMakeList
* [WIP] llama: ABI split builds five .so artifacts.
However, all .so files perform at the SVE level
* [WIP] llama: ABI split where five tiers are built sequentially.
* [WIP] llama: disable OpenMP in ABI split since most SoCs are big.LITTLE
* [WIP] llama: enable KleidiAI and disable tier 4 due to `+sve+sve2` bug caused by `ggml_add_cpu_backend_variant_impl` as explained below
```CMake
if (NOT SME_ENABLED MATCHES -1)
...
set(PRIVATE_ARCH_FLAGS "-fno-tree-vectorize;${PRIVATE_ARCH_FLAGS}+sve+sve2")
...
```
* core: add Google's cpu_features as a submodule
* core: implement cpu_detector native lib
* core: swap out hardcoded LlamaAndroid library loading
* core: add back OpenMP due to huge perf loss on TG128
* misc: reorg the pkg structure
* misc: rename LlamaAndroid related class to InferenceEngine prefixes
* [WIP] lib: move GgufMetadata into the lib submodule
* lib: expose GgufMetadataReader as interface only
* lib: replace the naive & plain SharedPreferences with DataStore implementation
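A DataStore-backed replacement for a plain SharedPreferences flag could look roughly like this (the preference name and key are hypothetical):
```kotlin
import android.content.Context
import androidx.datastore.preferences.core.booleanPreferencesKey
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Hypothetical DataStore-backed preference replacing a plain SharedPreferences read/write.
private val Context.appPrefs by preferencesDataStore(name = "app_preferences")
private val KEY_MONITORING_ENABLED = booleanPreferencesKey("monitoring_enabled")

fun Context.monitoringEnabledFlow(): Flow<Boolean> =
    appPrefs.data.map { prefs -> prefs[KEY_MONITORING_ENABLED] ?: true }

suspend fun Context.setMonitoringEnabled(enabled: Boolean) {
    appPrefs.edit { prefs -> prefs[KEY_MONITORING_ENABLED] = enabled }
}
```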
* lib: hide the internal implementations, only expose a facade and interfaces
* lib: expose Arm features
* di: add a stub TierDetection; provide both actual impl and stub in AppModule
* UI: add visualizer UI for Arm features
* misc: UI polish
* lib: refactor InferenceEngineLoader; add a `NONE` Llama Tier
* UI: support `NONE` Llama Tier in general settings
* lib: optimize engine loader; always perform a fresh detection when cache is null
* remote: add HuggingFaceModelDetails data class
* remote: refine HuggingFaceModel data class
* nit: remove `trendingScore` field from HuggingFace model entities, weird...
* remote: refactor HuggingFaceApiService; implement download feature in HuggingFaceRemoteDataSource
* remote: fix the incorrect parse of HuggingFace's inconsistent & weird JSON response
* UI: scaffold Models Management screen and view model
* UI: implement a dialog UI to show fetched HuggingFace models.
* UI: use a broadcast receiver to listen for download complete events and show local import dialog.
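One common way to do this is to listen for the system DownloadManager's completion broadcast; a minimal sketch follows (the callback wiring is an assumption, not the app's actual code):
```kotlin
import android.app.DownloadManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// Hypothetical receiver that reacts to DownloadManager completion broadcasts
// and hands the finished download id to the app's import flow.
class DownloadCompleteReceiver(
    private val onDownloadComplete: (downloadId: Long) -> Unit,
) : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == DownloadManager.ACTION_DOWNLOAD_COMPLETE) {
            val id = intent.getLongExtra(DownloadManager.EXTRA_DOWNLOAD_ID, -1L)
            if (id != -1L) onDownloadComplete(id)
        }
    }
}
```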
* data: handle network exceptions elegantly
* pkg: restructure `data`'s packages
* data: extract local file info, copy and cleanup logic into LocalFileDataSource
* nit: minor UI patch; add missing comments
* bugfix: tapping "Home" in navigation drawer should simply close it without any navigation action.
* UI: improve autoscroll during token generation
* lib: tested on JFrog Artifactory for Maven publishing
* UI: show RAM warning if model too large
* UI: polish model management screen's error dialog
* util: add more items into the mapping table of ISO 639-1 language code to ISO 3166-1 country code
* llm: properly propagate error to UI upon failing to load selected model
* UI: avoid duplicated calculation of token metrics
* lib: read & validate the magic number from the picked source file before executing the import
* UI: add "Learn More" hyperlinks to Error dialog upon model import failures
* lib: refactor the GgufMetadataReader to take InputStream instead of absolute path as argument
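The magic-number validation mentioned above can be done by comparing the first four bytes of the stream against the ASCII sequence "GGUF"; a minimal sketch over an InputStream follows (the helper name is illustrative):
```kotlin
import java.io.InputStream

// GGUF files begin with the 4 ASCII bytes "GGUF"; reject anything else early.
private val GGUF_MAGIC = "GGUF".toByteArray(Charsets.US_ASCII)

// Hypothetical validation helper; reads exactly 4 bytes from the stream.
fun hasGgufMagic(input: InputStream): Boolean {
    val header = ByteArray(4)
    var read = 0
    while (read < header.size) {
        val n = input.read(header, read, header.size - read)
        if (n == -1) return false
        read += n
    }
    return header.contentEquals(GGUF_MAGIC)
}
```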
* lib: fix the `SIMD` typo in Tier description
* core: verify model file path is readable
* lib: add UnsupportedArchitectureException for triaged error message
* util: split FormatUtils into multiple utils for better readability
* UI: change benchmark screen from raw markdown to table view
* bugfix: reset preselection upon running the preselected model
* misc: linter issue
* bugfix: fix the malfunctioning monitoring switch
* UI: update Arm features indicator; fix the broken hyperlinks
* UI: add quick action buttons to benchmark screen's result card
* UI: hide share fab after clearing all benchmark results
* UI: fix the model unload dialog message; elevate the model card and hide it by default on Conversation screen;
* UI: hide the stubbing actions in Conversation screen
* UI: add show/hide stats control to conversation screen's assistant message bubble; fix placeholder
* UI: add an info button to explain token metrics
* misc: remove the redundant `Companion` added due to refactoring
* UI: show corresponding system metrics detailed info upon tapping RAM / storage / temperature indicator
* UI: add info button to System Prompt switch; expand the model card by default
* UI: disable tag & language chips; add section headers to explain what they are
* misc: replace top bar indicator's spacer with padding
* UI: merge the Model Selection and Model Management into a unified Models screen
* UI: split the ModelsManagementViewModel from a unified ModelsViewModel due to huge complexity
* UI: add model loading in progress view; polish the empty model info view
* UI: polish the bottom bars and info view when no models found; show loading in progress while fetching models
* build: [BREAKING] bump the versions of libraries and plugins
* UI: fix the breaking build
* UI: add Tooltip on Import FAB for user onboarding
* UI: add AppPreferences to track user onboarding status
* UI: track user's first success on importing a model
* data: add hand crafted rules to filter the models fetched from HuggingFace API
* UI: update app name & about; polish top bars' indicators & buttons
* UI: polish Hugging Face download dialog UI
* UX: implement onboarding tooltips for model import and onboarding
* misc: use sentence case for CTA button labels
* [WIP] UI: add Arm color palette from Philip.Watson3
* UI: address Rojin's UX feedback
* UI: address Rojin's UX feedback - part 2
* UI: update Arm color palette from Philip.Watson3
* data: make sure preselected models are fetched in the same order as their IDs
* UI: fix UI issues in the generic settings screen and navigation drawer
* nit: address Rojin's feedback on model import message again
* nit: append `®` to all `Arm` labels
* UI: extract a reusable InfoAlertDialog
* core: support GGML_CPU_ALL_VARIANTS on Android!
* core: restructure Kleidi-Llama library
* core: organize CMake arguments
* data: sort preselected models according to device's available RAM
* app: update adaptive + themed + legacy icons and app name
* UI: fix the font size auto scaling for ArmFeaturesVisualizer
* core: further improve the performance on native methods
* UI: minor color palette changes; emphasize the bottom bar FABs; fix Settings Screen menu item label
* UI: make more room for assistant message bubble's width
* UI: better usage of tertiary colors to highlight model cards but not for warnings
* UI: fix the layout issue on large font sizes
* lib: support x86-64 by dynamically set Arm related definitions
* lib: replace the factory pattern for deprecated tiered lib loading with single instance pattern
* llama: update the library name in JNI and CMake project
* llama: update the library's package name and namespace
* llama: update the app's package name and namespace
* app: bump ksp version
* app: remove deprecated SystemUIController from accompanist by migrating to EdgeToEdge
* app: extract AppContent from MainActivity to a separate file in ui package
* lib: add File version for GGUF Magic number verification
* lib: perform engine state check inclusively instead of exclusively
* lib: change `LlamaTier` to `ArmCpuTier`
* lib: remove kleidi-llama related namings
* cleanup: remove Arm AI Chat/Playground app source code; replace with the basic sample app from https://github.com/hanyin-arm/Arm-AI-Chat-Sample
Note: the full Google Play version of the AI Chat app will be open sourced in another repo soon, so I didn't go through the trouble of pruning the history using `git filter-repo` here.
* [WIP] doc: update main and Android README docs; add self to code owners
* lib: revert System.load back to System.loadLibrary
* jni: introduce a logging util to filter different logging levels on different build types
* lib: enable app optimization
* doc: replace stub Google Play app URL with the actual link; add screenshots; add my GitHub ID to maintainer list
* Remove cpu_features
* Fix linters issues in editorconfig-checker job
https://github.com/ggml-org/llama.cpp/actions/runs/19548770247/job/55974800633?pr=17413
* Remove unnecessary Android CMake flag
* purge include/cpu_features directory
---------
Co-authored-by: Han Yin <han.yin@arm.com>
koboldcpp
KoboldCpp is an easy-to-use AI text-generation software for GGML and GGUF models, inspired by the original KoboldAI. It's a single self-contained distributable that builds off llama.cpp and adds many additional powerful features.
Features
- Single file executable, with no installation required and no external dependencies
- Runs on CPU or GPU, supports full or partial offloading
- LLM text generation (Supports all GGML and GGUF models, backwards compatibility with ALL past models)
- Image Generation (Stable Diffusion 1.5, SDXL, SD3, Flux)
- Speech-To-Text (Voice Recognition) via Whisper
- Text-To-Speech (Voice Generation) via OuteTTS, Kokoro, Parler and Dia
- Provides many compatible API endpoints for many popular web services (KoboldCppApi OpenAiApi OllamaApi A1111ForgeApi ComfyUiApi WhisperTranscribeApi XttsApi OpenAiSpeechApi)
- Bundled KoboldAI Lite UI with editing tools, save formats, memory, world info, author's note, characters, scenarios.
- Includes multiple modes (chat, adventure, instruct, storywriter) and UI Themes (aesthetic roleplay, classic writer, corporate assistant, messenger)
- Supports loading Tavern Character Cards, importing many different data formats from various sites, reading or exporting JSON savefiles and persistent stories.
- Many other features including new samplers, regex support, websearch, RAG via TextDB, image recognition/vision and more.
- Ready-to-use binaries for Windows, MacOS, Linux. Runs directly with Colab, Docker, also supports other platforms if self-compiled (like Android (via Termux) and Raspberry Pi).
- Need help finding a model? Read this!
Windows Usage (Precompiled Binary, Recommended)
- Windows binaries are provided in the form of koboldcpp.exe, which is a pyinstaller wrapper containing all necessary files. Download the latest koboldcpp.exe release here
- To run, simply execute koboldcpp.exe.
- Launching with no command line arguments displays a GUI containing a subset of configurable settings. Generally you don't have to change much besides the Presets and GPU Layers. Read the --help for more info about each setting.
- Obtain and load a GGUF model. See here
- By default, you can connect to http://localhost:5001
- You can also run it using the command line. For info, please check
koboldcpp.exe --help
Linux Usage (Precompiled Binary, Recommended)
On modern Linux systems, you should download the koboldcpp-linux-x64 prebuilt PyInstaller binary on the releases page. Simply download and run the binary (You may have to chmod +x it first). If you have an older device, you can also try the koboldcpp-linux-x64-oldpc instead for greatest compatibility.
Alternatively, you can also install koboldcpp to the current directory by running the following terminal command:
curl -fLo koboldcpp https://github.com/LostRuins/koboldcpp/releases/latest/download/koboldcpp-linux-x64-oldpc && chmod +x koboldcpp
After running this command you can launch Koboldcpp from the current directory using ./koboldcpp in the terminal (for CLI usage, run with --help).
Finally, obtain and load a GGUF model. See here
MacOS (Precompiled Binary)
- PyInstaller binaries for Modern ARM64 MacOS (M1, M2, M3) are now available! Simply download the MacOS binary
- In a MacOS terminal window, set the file to executable with chmod +x koboldcpp-mac-arm64 and run it with ./koboldcpp-mac-arm64.
- In newer MacOS you may also have to whitelist it in security settings if it's blocked. Here's a video guide.
- Alternatively, or for older x86 MacOS computers, you can clone the repo and compile from source code, see Compiling for MacOS below.
- Finally, obtain and load a GGUF model. See here
Run on Colab
- KoboldCpp now has an official Colab GPU Notebook! This is an easy way to get started without installing anything in a minute or two. Try it here!
- Note that KoboldCpp is not responsible for your usage of this Colab Notebook, you should ensure that your own usage complies with Google Colab's terms of use.
Run on RunPod
- KoboldCpp can now be used on RunPod cloud GPUs! This is an easy way to get started without installing anything in a minute or two, and is very scalable, capable of running 70B+ models at affordable cost. Try our RunPod image here!
Run on Novita AI
KoboldCpp can now also be run on Novita AI, a newer alternative GPU cloud provider which has a quick-launch KoboldCpp template as well. Check it out here!
Docker
- The official docker can be found at https://hub.docker.com/r/koboldai/koboldcpp
- If you're building your own docker, remember to enable LLAMA_PORTABLE
Obtaining a GGUF model
- KoboldCpp uses GGUF models. They are not included with KoboldCpp, but you can download GGUF files from other places such as Bartowski's Huggingface. Search for "GGUF" on huggingface.co for plenty of compatible models in the .gguf format.
- For beginners, we recommend the models L3-8B-Stheno-v3.2 (smaller and weaker) or Tiefighter 13B (old but very versatile model) or Gemma-3-27B Abliterated (largest and most powerful)
- Alternatively, you can download the tools to convert models to the GGUF format yourself here. Run convert-hf-to-gguf.py to convert them, then quantize_gguf.exe to quantize the result.
- Other models for Whisper (speech recognition), Image Generation, Text to Speech or Image Recognition can be found on the Wiki
Improving Performance
- GPU Acceleration: If you're on Windows with an Nvidia GPU you can get CUDA support out of the box using the --usecuda flag (Nvidia Only), or --usevulkan (Any GPU); make sure you select the correct .exe with CUDA support.
- GPU Layer Offloading: Add --gpulayers to offload model layers to the GPU. The more layers you offload to VRAM, the faster generation speed will become. Experiment to determine the number of layers to offload, and reduce by a few if you run out of memory.
- Increasing Context Size: Use --contextsize (number) to increase context size, allowing the model to read more text. Note that you may also need to increase the max context in the KoboldAI Lite UI as well (click and edit the number text field).
- Old CPU Compatibility: If you are having crashes or issues, you can try running in a non-avx2 compatibility mode by adding the --noavx2 flag. You can also try reducing your --blasbatchsize (set -1 to avoid batching).
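For example, a hypothetical Windows invocation combining several of these options (the layer count, context size and model name below are placeholders) might be:
koboldcpp.exe --usecuda --gpulayers 32 --contextsize 8192 mymodel.gguf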
For more information, be sure to run the program with the --help flag, or check the wiki.
Compiling KoboldCpp From Source Code
Compiling on Linux (Using koboldcpp.sh automated compiler script)
When you can't use the precompiled binary directly, we provide an automated build script which uses conda to obtain all dependencies, and generates (from source) a ready-to-use PyInstaller binary for Linux users.
- Clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
- Simply execute the build script with ./koboldcpp.sh dist and run the generated binary. (Not recommended for systems that already have an existing installation of conda. Dependencies: curl, bzip2)
./koboldcpp.sh # This launches the GUI for easy configuration and launching (X11 required).
./koboldcpp.sh --help # List all available terminal commands for using Koboldcpp, you can use koboldcpp.sh the same way as our python script and binaries.
./koboldcpp.sh rebuild # Automatically generates a new conda runtime and compiles a fresh copy of the libraries. Do this after updating Koboldcpp to keep everything functional.
./koboldcpp.sh dist # Generate your own precompiled binary (Due to the nature of Linux compiling these will only work on distributions equal or newer than your own.)
Compiling on Linux (Manual Method)
- To compile your binaries from source, clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
- A makefile is provided, simply run make (when compiling, you can set the number of parallel jobs with the -j flag).
- Optional Vulkan: Link your own install of Vulkan SDK manually with make LLAMA_VULKAN=1
- Optional CLBlast: Link your own install of CLBlast manually with make LLAMA_CLBLAST=1
  - Note: for these you will need to obtain and link OpenCL and CLBlast libraries.
  - For Arch Linux: Install cblas and clblast.
  - For Debian: Install libclblast-dev.
- You can attempt a CuBLAS build with LLAMA_CUBLAS=1 (or LLAMA_HIPBLAS=1 for AMD). You will need CUDA Toolkit installed. Some have also reported success with the CMake file, though that is more for windows.
- For a full featured build (all backends), do make LLAMA_CLBLAST=1 LLAMA_CUBLAS=1 LLAMA_VULKAN=1. (Note that LLAMA_CUBLAS=1 will not work on windows, you need visual studio)
- To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
- After all binaries are built, you can run the python script with the command python koboldcpp.py [ggml_model.gguf] [port]
Compiling on Windows
- You're encouraged to use the .exe released, but if you want to compile your binaries from source on Windows, the easiest way is:
  - Get the latest release of w64devkit (https://github.com/skeeto/w64devkit). Be sure to use the "vanilla one", not i686 or other different stuff. If you try, they will conflict with the precompiled libs!
  - Clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
  - Make sure you are using the w64devkit integrated terminal, then run make at the KoboldCpp source folder. This will create the .dll files for a pure CPU native build (when compiling, you can set the number of parallel jobs with the -j flag).
  - For a full featured build (all backends), do make LLAMA_CLBLAST=1 LLAMA_VULKAN=1. (Note that LLAMA_CUBLAS=1 will not work on windows, you need visual studio)
  - To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
  - If you want to generate the .exe file, make sure you have the python module PyInstaller installed with pip (pip install PyInstaller). Then run the script make_pyinstaller.bat
  - The koboldcpp.exe file will be at your dist folder.
- Building with CUDA: Visual Studio, CMake and CUDA Toolkit is required. Clone the repo, then open the CMake file and compile it in Visual Studio. Copy the koboldcpp_cublas.dll generated into the same directory as the koboldcpp.py file. If you are bundling executables, you may need to include CUDA dynamic libraries (such as cublasLt64_11.dll and cublas64_11.dll) in order for the executable to work correctly on a different PC.
- Replacing Libraries (Not Recommended): If you wish to use your own version of the additional Windows libraries (OpenCL, CLBlast, Vulkan), you can do it with:
- OpenCL - tested with https://github.com/KhronosGroup/OpenCL-SDK . If you wish to compile it, follow the repository instructions. You will need vcpkg.
- CLBlast - tested with https://github.com/CNugteren/CLBlast . If you wish to compile it you will need to reference the OpenCL files. It will only generate the ".lib" file if you compile using MSVC.
- Move the respective .lib files to the /lib folder of your project, overwriting the older files.
- Also, replace the existing versions of the corresponding .dll files located in the project directory root (e.g. clblast.dll).
- Make the KoboldCpp project using the instructions above.
Compiling on MacOS
- You can compile your binaries from source. You can clone the repo with git clone https://github.com/LostRuins/koboldcpp.git
- A makefile is provided, simply run make (when compiling, you can set the number of parallel jobs with the -j flag).
- If you want Metal GPU support, instead run make LLAMA_METAL=1, note that MacOS metal libraries need to be installed.
- To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1
- After all binaries are built, you can run the python script with the command python koboldcpp.py --model [ggml_model.gguf] (and add --gpulayers (number of layers) if you wish to offload layers to GPU).
Compiling on Android (Termux Installation)
Termux Quick Setup Script (Easy Setup)
- You can use this auto-installation script to quickly install and build everything and launch KoboldCpp with a model. Simply run:
curl -sSL https://raw.githubusercontent.com/LostRuins/koboldcpp/concedo/android_install.sh | sh
and it will install everything required. Alternatively, you can download the above android_install.sh script to a file, then do chmod +x and run it interactively.
Termux Manual Instructions (DIY Setup)
- Open termux and run the command apt update
- Install dependency apt install openssl
- Install other dependencies with pkg install wget git python
- Run pkg upgrade
- Clone the repo git clone https://github.com/LostRuins/koboldcpp.git
- Navigate to the koboldcpp folder cd koboldcpp
- Build the project make
- To make your build sharable and capable of working on other devices, you must use LLAMA_PORTABLE=1, this disables usage of ARM intrinsics.
- Grab a small GGUF model, such as wget https://huggingface.co/concedo/KobbleTinyV2-1.1B-GGUF/resolve/main/KobbleTiny-Q4_K.gguf
- Start the python server python koboldcpp.py --model KobbleTiny-Q4_K.gguf
- Connect to http://localhost:5001 on your mobile browser
- If you encounter any errors, make sure your packages are up-to-date with pkg up and pkg upgrade
- If you have trouble installing a dependency, you can try the command termux-change-repo and choose a different repo (e.g. Mirror by BFSU)
- GPU acceleration for Termux may be possible but I have not explored it. If you find a good cross-device solution, do share or PR it.
AMD Users
- For most users, you can get very decent speeds by selecting the Vulkan option instead, which supports both Nvidia and AMD GPUs.
- Alternatively, you can try the ROCM fork at https://github.com/YellowRoseCx/koboldcpp-rocm though this may be outdated.
Third Party Resources
- These unofficial resources have been contributed by the community, and may be outdated or unmaintained. No official support will be provided for them!
- Arch Linux Packages: CUBLAS, and HIPBLAS.
- Unofficial Dockers: korewaChino and noneabove1182
- Nix & NixOS: KoboldCpp is available on Nixpkgs and can be installed by adding just
koboldcppto yourenvironment.systemPackages(or it can also be placed inhome.packages).- Example Nix Setup and further information
- If you face any issues with running KoboldCpp on Nix, please open an issue here.
- GPTLocalhost - KoboldCpp is supported by GPTLocalhost, a local Word Add-in for you to use KoboldCpp in Microsoft Word. A local alternative to "Copilot in Word."
Questions and Help Wiki
- First, please check out The KoboldCpp FAQ and Knowledgebase which may already have answers to your questions! Also please search through past issues and discussions.
- If you cannot find an answer, open an issue on this github, or find us on the KoboldAI Discord.
KoboldCpp and KoboldAI API Documentation
KoboldCpp Public Demo
Considerations
- For Windows: No installation, single file executable, (It Just Works)
- Since v1.15, CLBlast is required if enabled; the prebuilt windows binaries are included in this repo. If not found, it will fall back to a mode without CLBlast.
- Since v1.33, you can set the context size to be above what the model supports officially. It does increase perplexity but should still work well below 4096 even on untuned models. (For GPT-NeoX, GPT-J, and Llama models) Customize this with --ropeconfig.
- Since v1.42, supports GGUF models for LLAMA and Falcon
- Since v1.55, lcuda paths on Linux are hardcoded and may require manual changes to the makefile if you do not use koboldcpp.sh for the compilation.
- Since v1.60, provides native image generation with StableDiffusion.cpp, you can load any SD1.5 or SDXL .safetensors model and it will provide an A1111 compatible API to use.
- I try to keep backwards compatibility with ALL past llama.cpp models. But you are also encouraged to reconvert/update your models if possible for best results.
- Since v1.75, openblas has been deprecated and removed in favor of the native CPU implementation.
License
- The original GGML library, stable-diffusion.cpp and llama.cpp by ggerganov are licensed under the MIT License
- However, KoboldAI Lite is licensed under the AGPL v3.0 License
- KoboldCpp code and other files are also under the AGPL v3.0 License unless otherwise stated
- Llama.cpp source repo is at https://github.com/ggml-org/llama.cpp (MIT)
- Stable-diffusion.cpp source repo is at https://github.com/leejet/stable-diffusion.cpp (MIT)
- TTS.cpp source repo is at https://github.com/mmwillet/TTS.cpp (MIT)
- KoboldCpp source repo is at https://github.com/LostRuins/koboldcpp (AGPL)
- KoboldAI Lite source repo is at https://github.com/LostRuins/lite.koboldai.net (AGPL)
- For any further enquiries, contact @concedo on discord, or LostRuins on github.
Notes
- If you wish, after building the koboldcpp libraries with make, you can rebuild the exe yourself with pyinstaller by using make_pyinstaller.bat
- API documentation available at /api (e.g. http://localhost:5001/api) and https://lite.koboldai.net/koboldcpp_api. An OpenAI compatible API is also provided at the /v1 route (e.g. http://localhost:5001/v1).
- All up-to-date GGUF models are supported, and KoboldCpp also includes backward compatibility for older versions/legacy GGML .bin models, though some newer features might be unavailable.
- An incomplete list of architectures is listed, but there are many hundreds of other GGUF models. In general, if it's GGUF, it should work.
- Llama / Llama2 / Llama3 / Alpaca / GPT4All / Vicuna / Koala / Pygmalion / Metharme / WizardLM / Mistral / Mixtral / Miqu / Qwen / Qwen2 / Yi / Gemma / Gemma2 / GPT-2 / Cerebras / Phi-2 / Phi-3 / GPT-NeoX / Pythia / StableLM / Dolly / RedPajama / GPT-J / RWKV4 / MPT / Falcon / Starcoder / Deepseek and many, many more.
Where can I download AI model files?
- The best place to get GGUF text models is huggingface. For image models, CivitAI has a good selection. Here are some to get started.
- Text Generation: L3-8B-Stheno-v3.2 (smaller and weaker) or Tiefighter 13B (old but very versatile model) or Gemma-3-27B Abliterated (largest and most powerful)
- Image Generation: Anything v3 or Deliberate V2 or Dreamshaper SDXL
- Image Recognition MMproj: Pick the correct one for your model architecture here
- Speech Recognition: Whisper models for Speech-To-Text
- Text-To-Speech: TTS models for Narration





