kvcache-ai-ktransformers/ktransformers/operators
__init__.py Initial commit 2024-07-27 16:06:58 +08:00
attention.py fix: adapt prefix cache in forward_linux_flashinfer 2025-02-18 11:40:15 +08:00
base_operator.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
cpuinfer.py [feature] release 0.1.3 2024-08-28 16:11:43 +00:00
dynamic_attention.py [feature] release 0.1.3 2024-08-28 16:11:43 +00:00
experts.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
flashinfer_wrapper.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
gate.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
linear.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
models.py done support deepseekv3 2025-02-04 15:53:38 +00:00
RoPE.py fix precision bug imported by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
triton_attention.py Update triton_attention.py 2025-02-15 15:41:01 +08:00