# KT-Kernel

High-performance kernel operations for KTransformers, featuring CPU-optimized MoE inference with AMX, AVX, KML, and BLIS (AMD library) support.

- [KT-Kernel](#kt-kernel)
  - [Note](#note)
  - [Features](#features)
  - [Installation](#installation)
    - [Prerequisites](#prerequisites)
    - [Quick Installation (Recommended)](#quick-installation-recommended)
    - [Manual Configuration (Advanced)](#manual-configuration-advanced)
    - [Verification](#verification)
  - [Integration with SGLang](#integration-with-sglang)
    - [Installation Steps](#installation-steps)
      - [1. Install SGLang](#1-install-sglang)
      - [2. Prepare Weights](#2-prepare-weights)
      - [3. Launch SGLang Server](#3-launch-sglang-server)
    - [Complete Example: Qwen3-30B-A3B](#complete-example-qwen3-30b-a3b)
      - [Option A: AMX Backend (AMXINT8)](#option-a-amx-backend-amxint8)
      - [Option B: LLAMAFILE Backend (GGUF)](#option-b-llamafile-backend-gguf)
    - [KT-Kernel Parameters](#kt-kernel-parameters)
  - [Direct Python API Usage](#direct-python-api-usage)
  - [Advanced Options](#advanced-options)
    - [Build Configuration](#build-configuration)
    - [Manual Installation](#manual-installation)
      - [1. Install System Dependencies](#1-install-system-dependencies)
      - [2. Set Build Configuration](#2-set-build-configuration)
      - [3. Build and Install](#3-build-and-install)
  - [Error Troubleshooting](#error-troubleshooting)
    - [CUDA Not Found](#cuda-not-found)
    - [hwloc Not Found](#hwloc-not-found)
  - [Weight Quantization](#weight-quantization)
  - [Before Commit!](#before-commit)

## Note

**Current Support Status:**

- ✅ **Intel CPUs with AMX**: Fully supported (using weights converted to INT4/INT8 format)
- ✅ **Universal CPU (llamafile backend)**: Supported (using GGUF-format weights)
- ⚠️ **AMD CPUs with BLIS**: In progress, not yet fully integrated

## Features

- **CPU-Optimized MoE Kernels**: High-throughput MoE expert kernels optimized for modern CPU instruction sets.
- **AMX INT4/INT8 Backend**: INT4/INT8 quantized expert inference backend for AMX-capable servers.
- **Llamafile CPU Backend**: AVX2/AVX512-based MoE backend built on Llamafile for universal CPU deployment.
- **NUMA-Aware Execution**: Thread pool and memory layout designed for multi-socket / multi-NUMA machines.
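
The NUMA-aware pieces matter most on multi-socket servers. A quick way to inspect the topology that the thread pool will map onto, using standard Linux tools (`numactl` may need to be installed separately):

```bash
# Sockets and NUMA nodes visible to the OS
lscpu | grep -E "Socket\(s\)|NUMA node"

# Per-node CPU and memory layout (requires the numactl package)
numactl --hardware
```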
## Installation

### Prerequisites

Activate the conda environment prepared for kt-kernel:

```bash
conda activate kt-kernel
```

### Quick Installation (Recommended)

You can install in two clear steps using the same script.

Option A: Two-step (run dependency installation and the build separately)

```bash
# 1) Install system prerequisites (cmake, hwloc, pkg-config)
./install.sh deps

# 2) Build and install kt-kernel (auto-detects the CPU instruction set)
# By default, the script cleans the local ./build directory before compiling
./install.sh build
```

Option B: One-step

```bash
./install.sh
```

The install script will:

- AMX CPU detected → `NATIVE + AMX=ON`
- No AMX detected → `NATIVE + AMX=OFF`

⚠️ **Important for LLAMAFILE backend users:**
If you have an AMX-capable CPU but plan to use the LLAMAFILE backend, do NOT use the default auto-detection build.
Use manual mode with `CPUINFER_CPU_INSTRUCT` set to `AVX512` or `AVX2` instead of `NATIVE` to avoid compilation issues (see below).
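
To see which of these paths auto-detection will take on your machine, you can inspect the CPU flags directly (a minimal sketch using the flag names the Linux kernel reports in `/proc/cpuinfo`):

```bash
# AMX support → the default build would use NATIVE + AMX=ON
grep -qw amx_tile /proc/cpuinfo && echo "AMX: yes" || echo "AMX: no"

# Instruction sets relevant when forcing CPUINFER_CPU_INSTRUCT manually
grep -qw avx512f /proc/cpuinfo && echo "AVX512: yes" || echo "AVX512: no"
grep -qw avx2    /proc/cpuinfo && echo "AVX2: yes"   || echo "AVX2: no"
```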
### Manual Configuration (Advanced)

If you need specific build options (e.g., for the LLAMAFILE backend or for compatibility with older CPUs), set the configuration manually:

```bash
export CPUINFER_CPU_INSTRUCT=AVX512  # Options: NATIVE, AVX512, AVX2, FANCY
export CPUINFER_ENABLE_AMX=OFF       # Options: ON, OFF

# Build only (skips auto-detection of the instruction set)
./install.sh build --manual
```

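After installing, a quick import check confirms the build is usable (assuming the package is exposed to Python as `kt_kernel`; adjust the module name if your install differs):

```bash
python -c "import kt_kernel; print('kt-kernel loaded from:', kt_kernel.__file__)"
```
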
## Integration with SGLang

### Installation Steps

#### 1. Install SGLang

```bash
pip install -e "python[all]"
```

#### 2. Prepare Weights

You need both GPU weights and CPU-side expert weights for heterogeneous inference. The exact format depends on the backend:

**GPU Weights (for all backends):**
Use the model weights required by SGLang for GPU inference (for example, the original or already-quantized model directory from Hugging Face).

**CPU Weights (AMX backend: `AMXINT4` / `AMXINT8`):**
Quantize weights to AMX-optimized INT4/INT8 format using the provided script:

```bash
python scripts/convert_cpu_weights.py \
    --input-path /path/to/model \
    --input-type bf16 \
    --output /path/to/cpu-weights \
    --quant-method int8  # or int4
```

- `--input-path`: Path to the GPU-side original weights
- `--input-type`: Depends on your GPU weights type (`fp8`, `fp16`, or `bf16`)

In the SGLang integration, `--kt-weight-path` should point to this converted CPU weights directory.

**Supported input formats:** FP8, FP16, BF16 → INT4/INT8.

**CPU Weights (LLAMAFILE backend: `LLAMAFILE`):**
The LLAMAFILE backend uses pre-quantized **GGUF** weights on the CPU side directly, without running `convert_cpu_weights.py`. You need to:

- Download a GGUF model directly from the web (e.g., GGUF repos on Hugging Face / ModelScope);
- In the SGLang integration, pass that GGUF directory as `--kt-weight-path`.

KT-Kernel supports multiple GGUF quantization formats such as `Q4_KM`, `Q4_K`, `Q5_K`, etc. Choose based on your latency and accuracy requirements.

#### 3. Launch SGLang Server

See the [KT-Kernel Parameters](#kt-kernel-parameters) section below for detailed parameter descriptions.

### Complete Example: Qwen3-30B-A3B

This example demonstrates the full workflow from downloading weights to launching the server, showing both **AMX backend** and **LLAMAFILE backend** options.

**Hardware Configuration:**

- **GPU**: NVIDIA RTX 4090 24GB
- **CPU**: 2x Intel Xeon Gold 6454S (64 physical cores total, 128 threads, 2 NUMA nodes)
- **Model**: [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)
- **GPU Weights**: BF16 original weights
- **CPU Weights**: AMXINT8 quantized

**How to verify your system configuration:**

```bash
lscpu
# (output excerpt)
# NUMA node(s): 2
```

- `--kt-cpuinfer 64`: Set to physical cores (64), not hyperthreads (128)
- `--kt-threadpool-count 2`: 2 NUMA nodes detected (dual-socket system)
- `--kt-num-gpu-experts 32`: With 24GB of GPU memory, roughly 32 experts fit on the GPU for this model (varies by model architecture and actual memory usage); see the memory check below
- `--kt-max-deferred-experts-per-token 2`: Enables pipelined execution; allows the CPU to process the next batch while the GPU completes the current one
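
When sizing `--kt-num-gpu-experts`, it helps to check how much GPU memory is actually free before and after the server starts (a sketch using standard `nvidia-smi` query flags):

```bash
# Total vs. used GPU memory; leave headroom for activations and KV cache
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```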

---

#### Option A: AMX Backend (AMXINT8)

For Intel CPUs with AMX instruction set support.

**Step 1: Download model weights**

```bash
# Install huggingface-hub (provides huggingface-cli) if not already installed
pip install huggingface-hub

# Download the model from Hugging Face
huggingface-cli download Qwen/Qwen3-30B-A3B --local-dir /mnt/data/models/Qwen3-30B-A3B
```

**Step 2: Convert to CPU weights (AMXINT8)**

```bash
python scripts/convert_cpu_weights.py \
    --input-path /mnt/data/models/Qwen3-30B-A3B \
    --input-type bf16 \
    --output <cpu-weights-output-dir> \
    --quant-method int8
```

**Step 3: Launch SGLang server**

```bash
python -m sglang.launch_server \
    --host 0.0.0.0 \
    --port 8000 \
    --model /mnt/data/models/Qwen3-30B-A3B \
    --trust-remote-code \
    --mem-fraction-static 0.92 \
    --chunked-prefill-size 4096 \
    --served-model-name Qwen3-30B-A3B \
    --enable-mixed-chunk \
    --kt-method AMXINT8 \
    --kt-weight-path <cpu-weights-output-dir> \
    --kt-cpuinfer 64 \
    --kt-threadpool-count 2 \
    --kt-num-gpu-experts 32 \
    --kt-max-deferred-experts-per-token 2
```

---

#### Option B: LLAMAFILE Backend (GGUF)

For universal CPUs (no AMX required), using pre-quantized GGUF weights directly.

**Step 1: Download GPU weights (original model)**

```bash
pip install huggingface-hub

huggingface-cli download Qwen/Qwen3-30B-A3B --local-dir /mnt/data/models/Qwen3-30B-A3B
```

**Step 2: Download CPU weights (GGUF format)**

```bash
huggingface-cli download Qwen/Qwen3-30B-A3B-GGUF Qwen3-30B-A3B-Q4_K_M.gguf \
    --local-dir /mnt/data/models/Qwen3-30B-A3B-Q4_K_M
```

**Step 3: Launch SGLang server**

```bash
python -m sglang.launch_server \
    --host 0.0.0.0 \
    --port 8000 \
    --model /mnt/data/models/Qwen3-30B-A3B \
    --trust-remote-code \
    --mem-fraction-static 0.92 \
    --chunked-prefill-size 4096 \
    --served-model-name Qwen3-30B-A3B \
    --enable-mixed-chunk \
    --kt-method LLAMAFILE \
    --kt-weight-path /mnt/data/models/Qwen3-30B-A3B-Q4_K_M \
    --kt-cpuinfer 64 \
    --kt-threadpool-count 2 \
    --kt-num-gpu-experts 32 \
    --kt-max-deferred-experts-per-token 2
```

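Once either server is up, a quick smoke test through SGLang's OpenAI-compatible endpoint exercises the hybrid CPU/GPU path end to end (the route and payload follow the standard OpenAI chat schema):

```bash
curl -s http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "Qwen3-30B-A3B",
          "messages": [{"role": "user", "content": "Say hello in one sentence."}],
          "max_tokens": 32
        }'
```
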
### KT-Kernel Parameters

| Parameter | Description | Example Value |
|-----------|-------------|---------------|
| `--kt-method` | CPU inference backend method | `AMXINT4`, `AMXINT8`, or `LLAMAFILE` |
| `--kt-weight-path` | Path to quantized CPU weights | `/path/to/cpu-weights` |
| `--kt-cpuinfer` | Number of CPU inference threads | `64` (adjust based on CPU cores) |
| `--kt-threadpool-count` | Number of thread pools for parallel execution | `2` (typically 1-4) |
| `--kt-num-gpu-experts` | Number of experts to keep on GPU | `32` (remaining experts go to CPU) |
| `--kt-max-deferred-experts-per-token` | Number of experts per token to defer for pipelined execution | `2` (0 to disable, 1-4 recommended) |

**Parameter Guidelines:**

- **`kt-method`**: Choose based on your CPU and weight format:
  - `AMXINT4`: Best performance on AMX CPUs with INT4 quantized weights (may cause a large accuracy drop for some models, e.g., Qwen3-30B-A3B)
  - `AMXINT8`: Higher accuracy with INT8 quantized weights on AMX CPUs
  - `LLAMAFILE`: GGUF-based backend

- **`kt-cpuinfer`**: Set to the number of **physical CPU cores** (not hyperthreads); see the one-liner below.
  - Check physical cores: `lscpu | grep -E "^CPU\(s\)|Thread\(s\) per core"`
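
A direct way to compute the physical-core count (the value for `--kt-cpuinfer`) is to count unique core/socket pairs; this uses only standard `lscpu` output:

```bash
# Unique (core, socket) pairs = physical cores
lscpu -p=Core,Socket | grep -v '^#' | sort -u | wc -l
```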

- **`kt-max-deferred-experts-per-token`**: Enables pipelined execution:
  - `0`: Synchronous execution (simpler, higher latency)
  - `1-4`: Deferred execution (recommended range; good latency/quality balance, requires tuning)
  - `5-7`: Highest latency reduction, but may introduce noticeable accuracy loss; use with care

## Direct Python API Usage

```python
wrapper = KTMoEWrapper(
    # ... (model and expert configuration arguments)
    threadpool_count=2,
    weight_path="/path/to/weights",
    chunked_prefill_size=512,
    method="AMXINT4",  # Options: "AMXINT4", "AMXINT8", "LLAMAFILE"
)

# Load weights (from disk - pre-quantized)
```

## Weight Quantization

For AMX backends (`AMXINT4` / `AMXINT8`), CPU-side experts must be converted to AMX-friendly INT4/INT8 format using the provided script:

```bash
python scripts/convert_cpu_weights.py \
    --input-path /path/to/model \
    --input-type bf16 \
    --output /path/to/cpu-weights \
    --quant-method int8
```

**Supported formats:** FP8, FP16, BF16 → INT4/INT8

For the LLAMAFILE backend (`LLAMAFILE`), CPU-side experts are loaded directly from **GGUF** weights. You do **not** need to run the AMX conversion script; instead, download a GGUF model from the web (e.g., a GGUF repo on Hugging Face) and point `weight_path` / SGLang's `--kt-weight-path` (or `--model`, when appropriate) to that GGUF directory. KT-Kernel supports multiple GGUF quantization types such as `Q4_KM`, `Q4_K`, `Q5_K`, etc., as in the sketch below.
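
To grab just one quantization variant from a larger GGUF repository, `huggingface-cli` can filter by filename pattern (the repo and file names here are illustrative; check the repo's file list for the exact names):

```bash
# Download only the Q4_K_M variant instead of the whole repo
huggingface-cli download Qwen/Qwen3-30B-A3B-GGUF \
    --include "*Q4_K_M*" \
    --local-dir /mnt/data/models/Qwen3-30B-A3B-Q4_K_M
```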

---

For detailed documentation, advanced options, and low-memory mode, see [scripts/README.md](scripts/README.md).

## Before Commit!

Commit messages should follow the Conventional Commits specification: https://www.conventionalcommits.org/
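
For example (a `fix` commit scoped to the llamafile backend):

```shell
git commit -m "fix(llamafile): resolve deferred experts data race and update README"
```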

Please format your code before committing:

```shell
cmake -B build
cd build
make format
```

You may need a newer clang-format (at least version 18). In a conda environment:

```shell
conda install -c conda-forge clang-format=18
rm -rf build
```

It's also recommended to install black for Python code formatting:

```shell
conda install black
```