Mirror of https://github.com/kvcache-ai/ktransformers.git (synced 2025-09-05).
Latest commit: use Marlin for lm_head, compute only the last token in lm_head during prefill, and extend the context window to 19K for DeepSeek-V3/R1 within 24GB VRAM.

Optimize-rule files in this directory (a sketch of the shared rule format follows the list):
- DeepSeek-V2-Chat-multi-gpu-4.yaml
- DeepSeek-V2-Chat-multi-gpu.yaml
- DeepSeek-V2-Chat.yaml
- DeepSeek-V2-Lite-Chat-multi-gpu.yaml
- DeepSeek-V2-Lite-Chat.yaml
- DeepSeek-V3-Chat-multi-gpu-4.yaml
- DeepSeek-V3-Chat-multi-gpu-8.yaml
- DeepSeek-V3-Chat-multi-gpu-marlin.yaml
- DeepSeek-V3-Chat-multi-gpu.yaml
- DeepSeek-V3-Chat.yaml
- Internlm2_5-7b-Chat-1m.yaml
- Mixtral.yaml
- Qwen2-57B-A14B-Instruct-multi-gpu.yaml
- Qwen2-57B-A14B-Instruct.yaml
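Each of these files is a YAML list of match/replace rules: a `match` clause selects modules in the loaded model (by name regex and/or class), and a `replace` clause swaps in an optimized ktransformers operator with its placement and kernel options. As a minimal sketch, the "use Marlin for lm_head" change in the commit above would correspond to a rule along these lines; the operator and class names shown follow the ktransformers convention but should be verified against the actual files in this directory.

```yaml
# Sketch of an optimize rule that maps lm_head onto the Marlin kernel.
# Names are assumptions based on the ktransformers rule format; check
# the real YAML files above before relying on them.
- match:
    name: "^lm_head$"          # regex on the module path within the model
    class: torch.nn.Linear     # only replace if the class also matches
  replace:
    class: ktransformers.operators.linear.KTransformersLinear
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
      generate_op: "KLinearMarlin"   # quantized Marlin GEMM for decode
      prefill_op: "KLinearTorch"     # plain torch matmul during prefill
```

A rules file is passed to the ktransformers loader at startup (for example via the optimize-rules argument of `local_chat.py`; check the CLI help for the exact flag name), which applies each rule in order while injecting the model.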