kvcache-ai-ktransformers/ktransformers/operators (directory listing; last commit 2025-02-24 11:58:10 +00:00)
__init__.py          — Initial commit (2024-07-27 16:06:58 +08:00)
attention.py         — clean PR code and disable flashinfer (2025-02-19 04:42:47 +00:00)
base_operator.py     — fix precision bug imported by position_ids in 0.2.0 (2025-02-17 09:23:14 +00:00)
cpuinfer.py          — [feature] release 0.1.3 (2024-08-28 16:11:43 +00:00)
dynamic_attention.py — [feature] release 0.1.3 (2024-08-28 16:11:43 +00:00)
experts.py           — Add data loader to read special weights for fp8; Add special weight process script (2025-02-24 11:34:17 +00:00)
flashinfer_wrapper.py — optimize gguf dequant, save mem, support Q2_K (2025-02-22 06:13:01 +00:00)
gate.py              — Add data loader to read special weights for fp8; Add special weight process script (2025-02-24 11:34:17 +00:00)
linear.py            — Merge remote-tracking branch 'upstream/develop-0.2.2' into support-fp8 (2025-02-24 11:58:10 +00:00)
models.py            — done support deepseekv3 (2025-02-04 15:53:38 +00:00)
RoPE.py              — fix precision bug imported by position_ids in 0.2.0 (2025-02-17 09:23:14 +00:00)
triton_attention.py  — Update triton_attention.py (2025-02-15 15:41:01 +08:00)