mirror of
https://github.com/kvcache-ai/ktransformers.git
synced 2026-04-29 20:29:48 +00:00
[docs]: refresh KT install commands (#1958)
Some checks are pending
Book-CI / test (push) Waiting to run
Book-CI / test-1 (push) Waiting to run
Book-CI / test-2 (push) Waiting to run
Deploy / deploy (macos-latest) (push) Waiting to run
Deploy / deploy (ubuntu-latest) (push) Waiting to run
Deploy / deploy (windows-latest) (push) Waiting to run
This commit is contained in:
parent
07e274467a
commit
0656e01ac1
8 changed files with 37 additions and 31 deletions
````diff
@@ -30,7 +30,7 @@ This tutorial demonstrates how to run DeepSeek V3.2 model inference using SGLang
 Before starting, ensure you have:

 1. **KT-Kernel installed** - Follow the [installation guide](./kt-kernel_intro.md#installation)
-2. **SGLang installed** - Install the kvcache-ai fork: `pip install sglang-kt` or run `./install.sh` from the ktransformers root
+2. **SGLang installed** - Install the kvcache-ai fork: `pip install kt-kernel sglang-kt` or run `./install.sh` from the ktransformers root
 3. **CUDA toolkit** - Compatible with your GPU (CUDA 11.8+ recommended)
 4. **Hugging Face CLI** - For downloading models:
 ```bash
````
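The doc change above amounts to one extra package in the pip command. A minimal sketch of the refreshed install flow, with command and package names taken from the changed doc lines (the `./install.sh` alternative assumes you are in the ktransformers repository root):

```shell
# Refreshed per this commit: install kt-kernel together with the
# kvcache-ai SGLang fork in a single pip invocation.
pip install kt-kernel sglang-kt

# Or, equivalently, run the bundled installer from the ktransformers root:
./install.sh
```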