# Running GLM-5 with SGLang and KT-Kernel

This tutorial demonstrates how to run GLM-5 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to CPU. KT-Kernel supports both BF16 and FP8 precision backends, allowing you to choose between maximum quality and reduced memory footprint.

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)
  - [Option A: Interactive Chat with KT CLI](#option-a-interactive-chat-with-kt-cli)
  - [Option B: OpenAI-Compatible API](#option-b-openai-compatible-api)
- [Additional Resources](#additional-resources)

## Prerequisites

Before starting, ensure you have:

1. **SGLang installed**

   Install the kvcache-ai fork of SGLang (choose one):

   ```bash
   # Option A: One-click install (from the ktransformers repository root)
   ./install.sh

   # Option B: pip install
   pip install sglang-kt
   ```

2. **KT-Kernel installed**

   ```bash
   git clone https://github.com/kvcache-ai/ktransformers.git
   cd ktransformers
   git submodule update --init --recursive
   cd kt-kernel && ./install.sh
   ```

3. **transformers reinstalled**

   ```bash
   pip install git+https://github.com/huggingface/transformers.git
   ```

4. **CUDA toolkit** - CUDA 12.0+ recommended (12.8+ for best FP8 support)

5. **Hugging Face CLI** - For downloading models:

   ```bash
   pip install -U huggingface-hub
   ```
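
Before moving on, a quick sanity check can confirm that everything resolved (a minimal sketch; `kt doctor` is the same diagnostic referenced later in this guide):

```bash
# The packages should import cleanly after installation
python -c "import sglang, transformers; print(transformers.__version__)"

# Diagnose the overall KT-Kernel / SGLang setup
kt doctor
```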

## Step 1: Download Model Weights

Download the GLM-5 weights from Hugging Face.

```bash
# FP8
hf download zai-org/GLM-5-FP8 \
  --local-dir /path/to/GLM-5-FP8

# BF16
hf download zai-org/GLM-5 \
  --local-dir /path/to/GLM-5
```

**Note:** Replace `/path/to/` with your actual storage path throughout this tutorial.
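
After the download completes, it can be worth confirming the checkpoint landed intact (an optional check; exact shard names vary by repository):

```bash
# The model config and safetensors shards should be present
ls /path/to/GLM-5-FP8/config.json
ls /path/to/GLM-5-FP8/*.safetensors | head
```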

## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

```bash
# FP8 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0

python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30000 \
  --model /path/to/GLM-5-FP8 \
  --kt-weight-path /path/to/GLM-5-FP8 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 30 \
  --kt-method FP8 \
  --kt-gpu-prefill-token-threshold 1024 \
  --kt-enable-dynamic-expert-update \
  --kt-expert-placement-strategy uniform \
  --trust-remote-code \
  --mem-fraction-static 0.75 \
  --served-model-name GLM5 \
  --enable-mixed-chunk \
  --tensor-parallel-size 8 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 16384 \
  --max-running-requests 4 \
  --max-total-tokens 128000 \
  --attention-backend flashinfer \
  --fp8-gemm-backend cutlass \
  --kv-cache-dtype bf16 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --watchdog-timeout 3000

# BF16 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0

python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30000 \
  --model /path/to/GLM-5 \
  --kt-weight-path /path/to/GLM-5 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 10 \
  --kt-method BF16 \
  --kt-gpu-prefill-token-threshold 1024 \
  --kt-enable-dynamic-expert-update \
  --kt-expert-placement-strategy uniform \
  --trust-remote-code \
  --mem-fraction-static 0.75 \
  --served-model-name GLM5 \
  --enable-mixed-chunk \
  --tensor-parallel-size 8 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 16384 \
  --max-running-requests 4 \
  --max-total-tokens 128000 \
  --attention-backend flashinfer \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --watchdog-timeout 3000
```

Layerwise prefill requires one extra MoE layer's worth of VRAM.

If you encounter OOM, adjust `--kt-num-gpu-experts`, `--chunked-prefill-size`, `--mem-fraction-static`, and `--max-total-tokens` when launching the server; the sketch below shows the relevant flags.
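
For example, a lower-VRAM launch might dial all four knobs down together. The values below are illustrative placeholders, not tuned recommendations; substitute them for the corresponding flags in the Step 2 command:

```bash
# Hypothetical lower-VRAM settings; replace the matching flags in the
# launch command above (illustrative values, not tuned recommendations)
--kt-num-gpu-experts 16          # fewer experts resident on GPU
--chunked-prefill-size 8192      # smaller prefill chunks
--mem-fraction-static 0.65       # reserve less static GPU memory
--max-total-tokens 64000         # smaller KV-cache token budget
```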

If you encounter other issues, try `kt doctor` to diagnose your setup.

See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.

## Step 3: Send Inference Requests

Once the server is running (default: `http://localhost:30000`), you can interact with the model in several ways:
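
Before sending requests, you can probe that the server is ready (a quick check; `/health` and `/v1/models` are the standard SGLang and OpenAI-compatible endpoints, assumed available here):

```bash
# Liveness probe; returns 200 once the server is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:30000/health

# Should list the served model name, GLM5
curl -s http://localhost:30000/v1/models
```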

### Option A: Interactive Chat with KT CLI

The easiest way to chat with the model:

```bash
kt chat
```

This opens an interactive terminal chat session. Type your messages and press Enter to send. Use `Ctrl+C` to exit.

### Option B: OpenAI-Compatible API

The server exposes an OpenAI-compatible API at `http://localhost:30000/v1`.

**curl example (streaming):**

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5",
    "messages": [{"role": "user", "content": "hi, who are you?"}],
    "stream": true
  }'
```
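
Since the endpoint follows the OpenAI chat-completions schema, standard sampling parameters also apply. A non-streaming variant (`temperature` and `max_tokens` are the usual OpenAI-style knobs):

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5",
    "messages": [{"role": "user", "content": "Explain MoE expert offloading in one paragraph."}],
    "stream": false,
    "temperature": 0.6,
    "max_tokens": 256
  }'
```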

## Additional Resources

- [GLM-5 Model Card](https://huggingface.co/zai-org/GLM-5)
- [KT-Kernel Documentation](../../../kt-kernel/README.md)
- [SGLang GitHub](https://github.com/sgl-project/sglang)
- [KT-Kernel Parameters Reference](../../../kt-kernel/README.md#kt-kernel-parameters)