kvcache-ai-ktransformers/ktransformers/operators
Last commit: 2025-02-22 13:05:08 +00:00
File                    Last commit message                                   Date
__init__.py             Initial commit                                        2024-07-27 16:06:58 +08:00
attention.py            clean PR code and disable flashinfer                  2025-02-19 04:42:47 +00:00
base_operator.py        fix precision bug imported by position_ids in 0.2.0   2025-02-17 09:23:14 +00:00
cpuinfer.py             [feature] release 0.1.3                               2024-08-28 16:11:43 +00:00
dynamic_attention.py    [feature] release 0.1.3                               2024-08-28 16:11:43 +00:00
experts.py              fix precision bug imported by position_ids in 0.2.0   2025-02-17 09:23:14 +00:00
flashinfer_wrapper.py   clean PR code and disable flashinfer                  2025-02-19 04:42:47 +00:00
gate.py                 fix precision bug imported by position_ids in 0.2.0   2025-02-17 09:23:14 +00:00
linear.py               Add fp8 linear kernel; add empty cache to fit in 16 GB VRAM (by 'wkGCaSS', Zhihu: https://zhuanlan.zhihu.com/p/25491611225)   2025-02-22 13:05:08 +00:00
models.py               done support deepseekv3                               2025-02-04 15:53:38 +00:00
RoPE.py                 fix precision bug imported by position_ids in 0.2.0   2025-02-17 09:23:14 +00:00
triton_attention.py     Update triton_attention.py                            2025-02-15 15:41:01 +08:00
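
These modules are the drop-in operator implementations that ktransformers injects in place of stock PyTorch/Transformers modules at load time, driven by YAML optimize rules (e.g. linear.py and experts.py back the injected linear and MoE-expert layers). Below is a minimal sketch of that wiring; the import path, function signature, and rule file name are assumptions taken from the project's injection tutorial, not from this listing, and should be verified against the repo.

    # Sketch only: optimize_and_load_gguf and its argument order are assumed
    # from the ktransformers injection tutorial; verify against the repo.
    import torch
    from transformers import AutoConfig, AutoModelForCausalLM
    from ktransformers.optimize.optimize import optimize_and_load_gguf  # assumed path

    config = AutoConfig.from_pretrained("deepseek-ai/DeepSeek-V3", trust_remote_code=True)
    with torch.device("meta"):  # build a weightless skeleton; real weights come from GGUF
        model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)

    # The YAML rule file maps stock modules onto the operators in this directory,
    # e.g. torch.nn.Linear -> a class in ktransformers.operators.linear (hypothetical rule).
    optimize_and_load_gguf(model, "optimize_rules.yaml", "/path/to/gguf_dir", config)

In that scheme, each rule matches a module by name or class and names a replacement class from one of the files above, which is how the same model definition can run with CPU, CUDA, or quantized kernels without code changes.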