kvcache-ai-ktransformers/ktransformers/operators (last updated 2025-04-28 08:44:47 +00:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `__init__.py` | Initial commit | 2024-07-27 16:06:58 +08:00 |
| `attention.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `balance_serve_attention.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `base_operator.py` | fix precision bug imported by position_ids in 0.2.0 | 2025-02-17 09:23:14 +00:00 |
| `cpuinfer.py` | cpuinfer: filter repeated backend instantiation | 2025-03-10 22:03:04 +08:00 |
| `dynamic_attention.py` | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| `experts.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `flashinfer_batch_prefill_wrapper.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `flashinfer_wrapper.py` | Fix ktransformers-server flashinfer wrapper position arg issue; | 2025-04-01 07:30:23 +00:00 |
| `gate.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `layernorm.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `linear.py` | fix some bugs | 2025-04-17 00:48:09 +08:00 |
| `mlp.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `models.py` | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| `RoPE.py` | support qwen3, dont speak human language | 2025-04-28 08:44:47 +00:00 |
| `triton_attention.py` | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |
| `triton_attention_prefill.py` | merge main; Add torch q8 linear | 2025-03-14 05:52:07 -04:00 |