kvcache-ai-ktransformers/ktransformers/operators
| File | Last commit message | Last commit date |
|---|---|---|
| __init__.py | Initial commit | 2024-07-27 16:06:58 +08:00 |
| attention.py | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| base_operator.py | fix precision bug imported by position_ids in 0.2.0 | 2025-02-17 09:23:14 +00:00 |
| cpuinfer.py | cpuinfer: filter repeated backend instantiation | 2025-03-10 22:03:04 +08:00 |
| dynamic_attention.py | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| experts.py | fix-singleton | 2025-03-14 04:16:53 +00:00 |
| flashinfer_wrapper.py | fix flashinfer precision | 2025-03-07 14:07:00 +00:00 |
| gate.py | Add data loader to read special weights for fp8; Add special weight process script | 2025-02-24 11:34:17 +00:00 |
| linear.py | Update readme; Format code; Add example yaml. | 2025-03-14 14:25:52 -04:00 |
| models.py | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| RoPE.py | fix precision bug imported by position_ids in 0.2.0 | 2025-02-17 09:23:14 +00:00 |
| triton_attention.py | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| triton_attention_prefill.py | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |