diff --git a/README.md b/README.md
index 81979ce..a5d7597 100644
--- a/README.md
+++ b/README.md
@@ -140,7 +140,7 @@ Some preparation:
    pip install ktransformers --no-build-isolation
    ```
 
-   for windows we prepare a pre compiled whl package in [ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl](https://github.com/kvcache-ai/ktransformers/releases/download/v0.1.1/ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl), which require cuda-12.5, torch-2.4, python-3.11, more pre compiled package are being produced.
+   For Windows we provide a pre-compiled whl package at [ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl](https://github.com/kvcache-ai/ktransformers/releases/tag/v0.2.0), which requires cuda-12.5, torch-2.4, and python-3.11; more pre-compiled packages are being produced.
 
 3. Or you can download source code and compile:
 
@@ -213,6 +213,8 @@ It features the following arguments:
 
 | Model Name                     | Model Size | VRAM  | Minimum DRAM    | Recommended DRAM  |
 | ------------------------------ | ---------- | ----- | --------------- | ----------------- |
+| DeepSeek-R1-q4_k_m             | 377G       | 14G   | 382G            | 512G              |
+| DeepSeek-V3-q4_k_m             | 377G       | 14G   | 382G            | 512G              |
 | DeepSeek-V2-q4_k_m             | 133G       | 11G   | 136G            | 192G              |
 | DeepSeek-V2.5-q4_k_m           | 133G       | 11G   | 136G            | 192G              |
 | DeepSeek-V2.5-IQ4_XS           | 117G       | 10G   | 107G            | 128G              |
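
For context on the Windows wheel change above, here is a minimal install sketch. It assumes the wheel filename from the link text; the actual v0.2.0 release page may ship a differently named asset, so check it before installing.

```bash
# Confirm the interpreter matches the wheel tags (cp311 -> Python 3.11).
python --version

# Install the downloaded pre-compiled wheel. The filename below is taken
# from the link text above; verify the actual asset name on the v0.2.0
# release page, as it may differ.
pip install ktransformers-0.1.1+cu125torch24avx2-cp311-cp311-win_amd64.whl
```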