kvcache-ai/ktransformers: ktransformers/operators (last commit 2025-03-01 11:28:25 +00:00)
__init__.py             Initial commit                                                                       2024-07-27 16:06:58 +08:00
attention.py            support chunk prefill, support 139K context for 24G VRAM                             2025-03-01 11:28:25 +00:00
base_operator.py        fix precision bug imported by position_ids in 0.2.0                                  2025-02-17 09:23:14 +00:00
cpuinfer.py             [feature] release 0.1.3                                                              2024-08-28 16:11:43 +00:00
dynamic_attention.py    [feature] release 0.1.3                                                              2024-08-28 16:11:43 +00:00
experts.py              fix experts torch                                                                    2025-02-26 15:04:40 +08:00
flashinfer_wrapper.py   support chunk prefill, support 139K context for 24G VRAM                             2025-03-01 11:28:25 +00:00
gate.py                 Add data loader to read special weights for fp8; Add special weight process script   2025-02-24 11:34:17 +00:00
linear.py               Merge remote-tracking branch 'upstream/develop-0.2.2' into support-fp8               2025-02-24 11:58:10 +00:00
models.py               support absorb for prefill long context                                              2025-02-25 08:52:02 +00:00
RoPE.py                 fix precision bug imported by position_ids in 0.2.0                                  2025-02-17 09:23:14 +00:00
triton_attention.py     Update triton_attention.py                                                           2025-02-15 15:41:01 +08:00