# Running GLM-5.1 with SGLang and KT-Kernel

This tutorial demonstrates how to run GLM-5.1 inference using SGLang integrated with KTransformers (KT-Kernel) for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to the CPU. KT-Kernel supports both BF16 and FP8 precision backends, letting you choose between maximum quality and a reduced memory footprint.

GLM-5.1 introduces thinking mode (enabled by default) and interleaved and preserved thinking, and ships MTP (Multi-Token Prediction) weights for both precisions.

## Table of Contents

- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)
- [Thinking Mode](#thinking-mode)
- [Additional Resources](#additional-resources)

## Prerequisites

Before starting, ensure you have the following (a quick verification sketch follows the list):

1. **SGLang installed**

   Install the kvcache-ai fork of SGLang (one of):

   ```bash
   # Option A: One-click install (from the ktransformers repo root)
   ./install.sh

   # Option B: pip install
   pip install sglang-kt
   ```

2. **KT-Kernel installed**

   ```bash
   git clone https://github.com/kvcache-ai/ktransformers.git
   cd ktransformers
   git submodule update --init --recursive
   cd kt-kernel && ./install.sh
   ```

3. **CUDA toolkit** - CUDA 12.0+ recommended (12.8+ for best FP8 support)

4. **Hugging Face CLI** - For downloading models:

   ```bash
   pip install -U huggingface-hub
   ```

## Step 1: Download Model Weights

Download the GLM-5.1 weights from Hugging Face. Both BF16 and FP8 models include MTP weights.

```bash
# FP8
hf download zai-org/GLM-5.1-FP8 \
  --local-dir /path/to/GLM-5.1-FP8

# BF16
hf download zai-org/GLM-5.1 \
  --local-dir /path/to/GLM-5.1
```

Note: Replace `/path/to/` with your actual storage path throughout this tutorial.
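Optionally, sanity-check the download before launching (a minimal sketch; a complete checkout should contain `config.json`, tokenizer files, and `*.safetensors` shards):

```bash
ls /path/to/GLM-5.1-FP8       # expect config.json, tokenizer files, *.safetensors
du -sh /path/to/GLM-5.1-FP8   # rough size check against the Hugging Face page
```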

## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

```bash
# FP8 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0

python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30000 \
  --model /path/to/GLM-5.1-FP8 \
  --kt-weight-path /path/to/GLM-5.1-FP8 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 30 \
  --kt-method FP8 \
  --kt-gpu-prefill-token-threshold 1024 \
  --kt-enable-dynamic-expert-update \
  --kt-expert-placement-strategy uniform \
  --trust-remote-code \
  --mem-fraction-static 0.75 \
  --served-model-name GLM5.1 \
  --enable-mixed-chunk \
  --tensor-parallel-size 8 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 16384 \
  --max-running-requests 4 \
  --max-total-tokens 128000 \
  --attention-backend flashinfer \
  --fp8-gemm-backend cutlass \
  --kv-cache-dtype bf16 \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --watchdog-timeout 3000
```

```bash
# BF16 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0

python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30000 \
  --model /path/to/GLM-5.1 \
  --kt-weight-path /path/to/GLM-5.1 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 10 \
  --kt-method BF16 \
  --kt-gpu-prefill-token-threshold 1024 \
  --kt-enable-dynamic-expert-update \
  --kt-expert-placement-strategy uniform \
  --trust-remote-code \
  --mem-fraction-static 0.75 \
  --served-model-name GLM5.1 \
  --enable-mixed-chunk \
  --tensor-parallel-size 8 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 16384 \
  --max-running-requests 4 \
  --max-total-tokens 128000 \
  --attention-backend flashinfer \
  --tool-call-parser glm47 \
  --reasoning-parser glm45 \
  --watchdog-timeout 3000
```

Layerwise prefill requires one extra MoE layer's worth of VRAM.

If you encounter OOM, reduce `--kt-num-gpu-experts`, `--chunked-prefill-size`, `--mem-fraction-static`, and `--max-total-tokens` when launching the server.
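For example, a lower-memory variant might swap in values like these (illustrative, not tuned defaults; they trade throughput for memory headroom):

```bash
# Replace the corresponding flags in the launch command above
--kt-num-gpu-experts 16 \
--chunked-prefill-size 8192 \
--mem-fraction-static 0.7 \
--max-total-tokens 65536
```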

If you encounter other issues, try `kt doctor` to diagnose your setup.

See KT-Kernel Parameters for detailed parameter tuning guidelines.

## Step 3: Send Inference Requests

Once the server is running (default: `http://localhost:30000`), you can interact with the model in several ways:

### Option A: Interactive Chat with KT CLI

The easiest way to chat with the model:

```bash
kt chat
```

This opens an interactive terminal chat session. Type your messages and press Enter to send. Use Ctrl+C to exit.

### Option B: OpenAI-Compatible API

The server exposes an OpenAI-compatible API at `http://localhost:30000/v1`.
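Before sending a chat request, you can confirm the server is up (a quick sketch, assuming the standard SGLang/OpenAI-compatible endpoints):

```bash
curl http://localhost:30000/health      # basic liveness probe
curl http://localhost:30000/v1/models   # should list the served model name, GLM5.1
```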

`curl` example (streaming):

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5.1",
    "messages": [{"role": "user", "content": "hi, who are you?"}],
    "stream": true
  }'
```
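A non-streaming variant (drop `"stream": true`) returns a single JSON object; piping it through a pretty-printer makes the reply easier to read:

```bash
curl -s http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5.1",
    "messages": [{"role": "user", "content": "hi, who are you?"}]
  }' | python -m json.tool
```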

## Thinking Mode

GLM-5.1 has thinking mode enabled by default. It supports two reasoning modes:

- **Interleaved Thinking** - Recommended for general conversation scenarios
- **Interleaved + Preserved Thinking** - Recommended for agentic workflows, especially code agents (e.g., Claude Code, Roo Code, Kilo Code)

To enable interleaved + preserved thinking with SGLang, pass the following parameters in your API request:

"chat_template_kwargs": {
    "enable_thinking": true,
    "clear_thinking": false
}

To disable thinking mode:

"chat_template_kwargs": {
    "enable_thinking": false
}
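Putting it together, a complete request enabling interleaved + preserved thinking might look like this (a sketch following the same endpoint and payload shape as the streaming example above):

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5.1",
    "messages": [{"role": "user", "content": "Outline a plan to refactor a Python module."}],
    "chat_template_kwargs": {
      "enable_thinking": true,
      "clear_thinking": false
    }
  }'
```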

Default settings (suitable for most tasks):

- temperature: 1.0
- top-p: 0.95
- max new tokens: 131072

Terminal Bench:

- temperature: 0.7
- top-p: 1.0
- max new tokens: 16384
- context length: 202752

Tau2-Bench:

- temperature: 0
- max new tokens: 16384

For multi-turn agentic tasks (e.g., Tau2-Bench and Terminal Bench 2), enable preserved thinking mode.
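For illustration, a Terminal-Bench-style request would combine those sampling settings with preserved thinking (values taken from the lists above; `max_tokens` is the standard OpenAI-compatible field for max new tokens):

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM5.1",
    "messages": [{"role": "user", "content": "Find and fix the failing test in this repo."}],
    "temperature": 0.7,
    "top_p": 1.0,
    "max_tokens": 16384,
    "chat_template_kwargs": {"enable_thinking": true, "clear_thinking": false}
  }'
```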

## Additional Resources