Revise GPU/CPU memory footprint information

Updated memory footprint details for DeepSeek models.
Peilin Li, 2025-11-05 12:11:19 +08:00, committed by GitHub
parent 501b114863
commit 6721f8765d

@@ -284,11 +284,11 @@ chunk_size: 8192
 ### GPU/CPU Memory Footprint
-- DeepSeek-V3 (671B; 61 layers with 58 MoE): ~**70 GB** total GPU memory (multi-GPU), ~**1.2–1.3 TB** host memory.
-- DeepSeek-V2-Lite (14B; 27 layers with 26 MoE): ~**5.5 GB** GPU memory, ~**150 GB** host memory.
+- DeepSeek-V3 (671B; 61 layers with 58 MoE): ~**70 GB** total GPU VRAM (multi-GPU), ~**1.2–1.3 TB** CPU RAM.
+- DeepSeek-V2-Lite (14B; 27 layers with 26 MoE): ~**5.5 GB** GPU VRAM, ~**30 GB** CPU RAM.
 ## Conclusion
 By integrating **KTransformers LoRA fine-tuning** into **LLaMA-Factory**, we provide a practical guide for efficient training and deployment of MoE LLMs. KT brings cutting-edge optimizations (DeepSeek/Qwen/Kimi support with AMX-accelerated kernels), and LoRA enables customization with very low GPU memory usage. LLaMA-Factory offers a friendly, unified interface.
 This integration (akin to Unsloth-style speedups) means even models with tens to hundreds of billions of parameters can be fine-tuned and deployed with low latency on commodity hardware. You get **memory savings, speed-ups, and usability** together. We encourage you to try LLaMA-Factory + KT for your next MoE project and follow this guide. Feedback is welcome!
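
Before launching a run, it can be worth checking a machine against the revised footprint figures. Below is a minimal sketch, not part of the commit itself: it assumes `psutil` and `torch` are installed, and the thresholds are simply the DeepSeek-V2-Lite numbers from the list above.

```python
# Sanity-check host RAM and GPU VRAM against the revised footprint.
# Thresholds are illustrative, taken from the DeepSeek-V2-Lite entry
# above (~5.5 GB GPU VRAM, ~30 GB CPU RAM); adjust for other models.
import psutil
import torch

REQUIRED_RAM_GB = 30.0   # ~30 GB CPU RAM (DeepSeek-V2-Lite)
REQUIRED_VRAM_GB = 5.5   # ~5.5 GB GPU VRAM (DeepSeek-V2-Lite)

# Total physical host memory, in GiB.
ram_gb = psutil.virtual_memory().total / 1024**3
print(f"Host RAM: {ram_gb:.1f} GB (need ~{REQUIRED_RAM_GB:g} GB)")

# Total VRAM on the first visible CUDA device, in GiB.
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0 VRAM: {vram_gb:.1f} GB (need ~{REQUIRED_VRAM_GB:g} GB)")
else:
    print("No CUDA device visible to torch.")
```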