kvcache-ai-ktransformers/kt-kernel/python/utils
Benjamin F 8484ef8b16
[feat](kt-kernel): adapt MXFP4 MoE backend for DeepSeek-V4-Flash (#1950)
V4-Flash routed experts ship as native MXFP4 (E2M1 nibble + ue8m0 group
scale). Changes:

- Expose AMXFP4_KGroup_MOE through NativeMoEWrapper.
- Add a loader that handles V4's
  `layers.{L}.ffn.experts.{i}.{w1,w3,w2}.{weight,scale}` naming and
  converts ue8m0 group scales to bf16 via a lossless bit-cast (sketched
  below).
- Register the model entry.
- Ship an end-to-end numerical validation script.
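
The loader itself lives in loader.py and is not shown on this page. As a
minimal sketch of the two mechanics the message names (the helper names
`ue8m0_to_bf16` and `expert_keys` are assumptions for illustration, not
the file's actual API):

```python
import torch

def ue8m0_to_bf16(scale_u8: torch.Tensor) -> torch.Tensor:
    # A ue8m0 byte holds only a biased exponent E, encoding the scale
    # 2**(E - 127). bf16 has the same 8-bit exponent field, so shifting E
    # into that field (sign = 0, mantissa = 0) reproduces the identical
    # power of two. This is lossless over the normal range 1 <= E <= 254;
    # E = 0 and E = 255 would need special-casing.
    return (scale_u8.to(torch.int16) << 7).view(torch.bfloat16)

def expert_keys(layer: int, expert: int):
    # Checkpoint key pattern quoted in the commit message:
    #   layers.{L}.ffn.experts.{i}.{w1,w3,w2}.{weight,scale}
    for proj in ("w1", "w3", "w2"):
        for part in ("weight", "scale"):
            yield f"layers.{layer}.ffn.experts.{expert}.{proj}.{part}"
```

For example, `ue8m0_to_bf16(torch.tensor([127, 128, 120], dtype=torch.uint8))`
yields exactly `[1.0, 2.0, 0.0078125]`: every in-range ue8m0 scale is a power
of two, so the cast introduces no rounding.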

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 18:11:53 +08:00
File           Last commit                                                                Date
__init__.py    Kt minimax (#1742)                                                         2025-12-24 15:39:44 +08:00
amx.py         [feat](kt-kernel): adapt MXFP4 MoE backend for DeepSeek-V4-Flash (#1950)   2026-04-25 18:11:53 +08:00
llamafile.py   [fix]: fix --numa-nodes handling (#1904)                                   2026-03-31 17:50:22 +08:00
loader.py      [feat](kt-kernel): adapt MXFP4 MoE backend for DeepSeek-V4-Flash (#1950)   2026-04-25 18:11:53 +08:00
moe_kernel.py  [fix]: fix --numa-nodes handling (#1904)                                   2026-03-31 17:50:22 +08:00