Mirror of https://github.com/kvcache-ai/ktransformers.git, synced 2025-09-05 20:19:51 +00:00.
Latest commit: use Marlin for lm_head; during prefill, lm_head computes logits only for the last token; extends the context window to 19K for DeepSeek-V3/R1 within 24 GB VRAM.
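The prefill optimization mentioned in the commit can be sketched as follows: since only the last token's logits are needed to sample the first generated token, the lm_head projection is applied to that single position rather than the whole prompt, avoiding a `(seq_len, vocab)` logits tensor. This is a minimal NumPy illustration of the idea, not the actual ktransformers implementation; shapes and the `lm_head_prefill` helper are hypothetical.

```python
import numpy as np

def lm_head_prefill(hidden_states, weight):
    """Project only the last position through the lm_head during prefill.

    hidden_states: (batch, seq_len, hidden)
    weight:        (vocab, hidden) -- lm_head weight matrix
    returns:       (batch, 1, vocab) logits for the final prompt token
    """
    last = hidden_states[:, -1:, :]  # keep only the last token's hidden state
    return last @ weight.T           # (batch, 1, vocab)

# Illustrative (hypothetical) shapes, far smaller than the real model dims.
rng = np.random.default_rng(0)
h = rng.standard_normal((1, 8, 16))   # batch=1, seq_len=8, hidden=16
w = rng.standard_normal((32, 16))     # vocab=32
logits = lm_head_prefill(h, w)
print(logits.shape)  # (1, 1, 32)
```

Computing logits for every prompt token would instead allocate `(1, 8, 32)` here; at 19K context with a real vocabulary, skipping that allocation is a meaningful memory saving.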
Files in this directory:

- __init__.py
- configuration_deepseek.py
- configuration_deepseek_v3.py
- configuration_llama.py
- custom_cache.py
- modeling_deepseek.py
- modeling_deepseek_v3.py
- modeling_llama.py
- modeling_mixtral.py
- modeling_qwen2_moe.py