Mirror of https://github.com/kvcache-ai/ktransformers.git (synced 2025-09-05 20:19:51 +00:00)
Use Marlin for lm_head; during prefill, lm_head computes logits only for the last token. This extends the context window to 19K for DeepSeek-V3/R1 within 24GB VRAM.
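The last-token optimization rests on a simple observation: during prefill, only the final position's logits are needed to sample the next token, so the lm_head projection can be applied to one row instead of the whole sequence. A minimal NumPy sketch of the idea (not the repository's actual Marlin kernel; function names and shapes here are illustrative):

```python
import numpy as np

def lm_head_full(hidden, weight):
    # Naive prefill: project every position -> (seq_len, vocab) logits.
    return hidden @ weight.T

def lm_head_last_token(hidden, weight):
    # Optimized prefill: only the last position's logits are needed to
    # sample the next token, so project just that row -> (1, vocab).
    return hidden[-1:] @ weight.T

rng = np.random.default_rng(0)
seq_len, hidden_dim, vocab = 190, 64, 32  # toy sizes for illustration
hidden = rng.standard_normal((seq_len, hidden_dim))
weight = rng.standard_normal((vocab, hidden_dim))

full = lm_head_full(hidden, weight)
last = lm_head_last_token(hidden, weight)
# Same next-token logits, at 1/seq_len of the compute and activation memory.
assert np.allclose(full[-1:], last)
```

Skipping the per-position logits also avoids materializing a seq_len-by-vocab activation, which is part of what frees VRAM for a longer context.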
Directories:
custom_marlin/quantize/utils
kvcache
llamafile