kvcache-ai-ktransformers/ktransformers/operators
Name                                 Last commit message                                                          Last updated
ascend/                              support npu                                                                  2025-07-22 10:58:16 +00:00
__init__.py                          Initial commit                                                               2024-07-27 16:06:58 +08:00
attention.py                         support npu                                                                  2025-07-22 10:58:25 +00:00
balance_serve_attention.py           update torch MLA kernel                                                      2025-05-14 09:45:12 +00:00
base_operator.py                     support safetensor load, delete architectures argument                       2025-05-09 10:38:29 +00:00
cpuinfer.py                          cpuinfer: filter repeated backend instantiation                              2025-03-10 22:03:04 +08:00
dynamic_attention.py                 merge main; Add torch q8 linear                                              2025-03-14 05:52:07 -04:00
experts.py                           fix local_chat.py chunk_size not effect experts                              2025-05-23 02:35:01 +00:00
flashinfer_batch_prefill_wrapper.py  support npu                                                                  2025-07-22 10:58:25 +00:00
flashinfer_wrapper.py                support npu                                                                  2025-07-22 10:58:25 +00:00
gate.py                              Enable support for Intel XPU devices, add support for DeepSeek V2/V3 first  2025-05-14 19:37:27 +00:00
layernorm.py                         add XPU support for qwen3moe local chat                                      2025-05-22 21:01:41 +08:00
linear.py                            support npu                                                                  2025-07-22 10:58:25 +00:00
mlp.py                               support safetensor load, delete architectures argument                       2025-05-09 10:38:29 +00:00
models.py                            add XPU support for qwen3moe local chat                                      2025-05-22 21:01:41 +08:00
RoPE.py                              support safetensor load, delete architectures argument                       2025-05-09 10:38:29 +00:00
triton_attention.py                  merge main; Add torch q8 linear                                              2025-03-14 05:52:07 -04:00
triton_attention_prefill.py          merge main; Add torch q8 linear                                              2025-03-14 05:52:07 -04:00