
# Running Qwen3.5 with SGLang and KT-Kernel

This tutorial demonstrates how to run inference for the Qwen3.5 (MoE-400B) model using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading expert weights to the CPU.

## Table of Contents

- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)

## Hardware Requirements

Minimum Configuration:

- **GPU**: 4x NVIDIA RTX 4090 (or equivalent, with at least 96GB total VRAM available)
- **CPU**: x86 CPU with AVX512F support (e.g., Intel Sapphire Rapids)
- **RAM**: at least 800GB of system memory
- **Storage**: ~800GB for model weights (BF16)

## Prerequisites

Before starting, ensure you have:

1. **KT-Kernel installed**:

   ```bash
   git clone https://github.com/kvcache-ai/ktransformers.git
   cd ktransformers
   git checkout qwen3.5
   git submodule update --init --recursive
   cd kt-kernel && ./install.sh
   ```

2. **SGLang installed** - Install the kvcache-ai fork of SGLang (one of the following; a quick verification sketch follows this list):

   ```bash
   # Option A: One-click install (from the ktransformers root)
   ./install.sh

   # Option B: pip install
   pip install sglang-kt
   ```

   Note: You may need to reinstall cuDNN: `pip install nvidia-cudnn-cu12==9.16.0.29`

3. **CUDA toolkit** - Compatible with your GPU (CUDA 12.8+ recommended)

4. **Hugging Face CLI** - For downloading models:

   ```bash
   pip install huggingface-hub
   ```
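To verify the setup before moving on, you can check that the kvcache-ai fork (packaged as `sglang-kt`) is the SGLang that Python resolves. A minimal sketch; the exact version strings will vary with your install:

```bash
# Confirm the sglang-kt fork is installed and importable
pip show sglang-kt
python -c "import sglang; print(sglang.__version__)"
```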

## Step 1: Download Model Weights

```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download Qwen3.5 (BF16)
huggingface-cli download Qwen/Qwen3.5 \
  --local-dir /path/to/qwen3.5
```

Note: Replace the `/path/to/...` placeholders with your actual storage paths throughout this tutorial.
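Before launching the server, it is worth confirming the download completed. A quick hedged check, assuming the standard Hugging Face repository layout (a `config.json` plus safetensors weight shards):

```bash
# The model directory should contain the config and the weight shards
ls /path/to/qwen3.5/config.json
du -sh /path/to/qwen3.5   # expect on the order of 800GB for the BF16 weights
```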

## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (4x RTX 4090 Example)

```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30005 \
  --model /path/to/qwen3.5 \
  --kt-weight-path /path/to/qwen3.5 \
  --kt-cpuinfer 60 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 1 \
  --kt-method BF16 \
  --attention-backend triton \
  --trust-remote-code \
  --mem-fraction-static 0.98 \
  --chunked-prefill-size 4096 \
  --max-running-requests 32 \
  --max-total-tokens 32000 \
  --served-model-name qwen3.5 \
  --enable-mixed-chunk \
  --tensor-parallel-size 4 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --disable-custom-all-reduce
```

See KT-Kernel Parameters for detailed parameter tuning guidelines.
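Loading a 400B-class MoE model can take a while. Before sending requests, you can poll the server until it answers; this sketch assumes the `/health` endpoint that stock SGLang exposes:

```bash
# Poll until the server responds (endpoint assumed from stock SGLang)
until curl -sf http://localhost:30005/health > /dev/null; do
  echo "waiting for server..."
  sleep 10
done
echo "server is up"
```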

## Step 3: Send Inference Requests

Once the server is running, you can send inference requests using the OpenAI-compatible API.
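As a first check, you can list the served models; the reported name should match the `--served-model-name` value from the launch command (the `/v1/models` endpoint is assumed from the OpenAI-compatible API surface):

```bash
# Should report "qwen3.5", matching --served-model-name above
curl -s http://localhost:30005/v1/models
```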

### Basic Chat Completion Request

```bash
curl -s http://localhost:30005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi, who are you?"}
    ]
  }'
```

### Example Response

```json
{
    "id": "c79f6d63e04f4874acb8853d218e1bf1",
    "object": "chat.completion",
    "created": 1770880035,
    "model": "qwen3.5",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! I'm **Qwen**, a large language model developed by **Alibaba Cloud**. I'm designed to provide helpful, accurate, and safe information across a wide range of topics—whether you have questions, need help with writing, coding, analysis, or just want to explore ideas together.\n\nHow can I assist *you* today?",
                "reasoning_content": null,
                "tool_calls": null
            },
            "logprobs": null,
            "finish_reason": "stop",
            "matched_stop": 248046
        }
    ],
    "usage": {
        "prompt_tokens": 16,
        "total_tokens": 527,
        "completion_tokens": 511,
        "prompt_tokens_details": null,
        "reasoning_tokens": 0
    },
    "metadata": {
        "weight_version": "default"
    }
}
```
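
### Streaming Request

Since the API is OpenAI-compatible, streaming works by flipping the `stream` flag; the response then arrives as server-sent events rather than a single JSON object. A minimal sketch:

```bash
curl -sN http://localhost:30005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5",
    "stream": true,
    "messages": [
      {"role": "user", "content": "hi, who are you?"}
    ]
  }'
```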