kvcache-ai/ktransformers — ktransformers/optimize/optimize_rules
Latest commit: 5ec33d046d by Atream (2025-02-22 06:13:01 +00:00)
  optimize gguf dequant, save mem, support Q2_K;
  use marlin for lm_head, lm_head only calc last token for prefill;
  extend context window to 19K for DeepSeek-V3/R1 within 24GB VRAM
File                                      Last commit message                              Date
DeepSeek-V2-Chat-multi-gpu-4.yaml         optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V2-Chat-multi-gpu.yaml           optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V2-Chat.yaml                     optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V2-Lite-Chat-multi-gpu.yaml      optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V2-Lite-Chat.yaml                optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V3-Chat-multi-gpu-4.yaml         optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V3-Chat-multi-gpu-8.yaml         optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V3-Chat-multi-gpu-marlin.yaml    optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V3-Chat-multi-gpu.yaml           optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
DeepSeek-V3-Chat.yaml                     optimize GPU                                     2025-02-21 05:06:57 +00:00
Internlm2_5-7b-Chat-1m.yaml               [feature] release 0.1.3                          2024-08-28 16:11:43 +00:00
Mixtral.yaml                              optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
Qwen2-57B-A14B-Instruct-multi-gpu.yaml    optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
Qwen2-57B-A14B-Instruct.yaml              optimize gguf dequant, save mem, support Q2_K    2025-02-22 06:13:01 +00:00
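Each file above is a per-model YAML rule set that tells ktransformers which modules to replace with optimized operators (e.g. Marlin-quantized linear layers, CPU-offloaded experts) and on which device to run them. The snippet below is a minimal sketch of what one such rule might look like; the regex, operator class, and kwargs shown are illustrative assumptions based on the match/replace rule pattern, not text copied from any of the listed files.

```yaml
# Hypothetical optimize rule: the module regex, class path, and kwargs
# are assumptions for illustration, not taken from the files above.
- match:
    # Regex over module names in the loaded model
    name: "^model\\.layers\\..*\\.mlp\\.experts$"
  replace:
    # Operator class to inject in place of the matched module
    class: ktransformers.operators.experts.KTransformersExperts
    kwargs:
      # Device placement for prefill vs. token generation
      prefill_device: "cuda"
      generate_device: "cpu"
```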