kvcache-ai-ktransformers/ktransformers/operators
Aubrey Li 12a4c631df Fix TypeError when invoking KLinearCPUInfer.forward()
Fix the following error:

  File "/home/aubrey/work/ktransformers/ktransformers/operators/linear.py", line 825, in forward
    y = self.generate_linear.forward(x, bsz_tensor)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: KLinearCPUInfer.forward() takes 2 positional arguments but 3 were given
2025-04-07 12:03:35 +08:00
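The traceback points to a signature mismatch: linear.py passes bsz_tensor as a second positional argument, while KLinearCPUInfer.forward() only accepted x. A minimal sketch of that mismatch and one plausible fix (accepting an optional bsz_tensor) is shown below; the class and argument names follow the traceback, but the default value, body, and call-site values are assumptions, not the actual patch.

  import torch
  from typing import Optional

  class KLinearCPUInfer:
      # Before the fix the method was effectively forward(self, x), i.e. two
      # positional arguments, so forward(x, bsz_tensor) raised the TypeError above.
      def forward(self, x: torch.Tensor, bsz_tensor: Optional[torch.Tensor] = None) -> torch.Tensor:
          # Accepting bsz_tensor (even if unused on the CPU path) keeps the call
          # site `y = self.generate_linear.forward(x, bsz_tensor)` working.
          return x  # placeholder for the real CPU-infer linear computation

  # Hypothetical call mirroring linear.py line 825:
  layer = KLinearCPUInfer()
  x = torch.randn(1, 8)
  bsz_tensor = torch.tensor([1])
  y = layer.forward(x, bsz_tensor)  # no longer raises TypeError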
__init__.py Initial commit 2024-07-27 16:06:58 +08:00
attention.py update install doc and fix local_chat bug 2025-04-03 12:42:41 +08:00
base_operator.py fix precision bug introduced by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
cpuinfer.py cpuinfer: filter repeated backend instantiation 2025-03-10 22:03:04 +08:00
dynamic_attention.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00
experts.py add balance-serve, support concurrence 2025-03-31 22:55:32 +08:00
flashinfer_wrapper.py Fix ktransformers-server flashinfer wrapper position arg issue; 2025-04-01 07:30:23 +00:00
gate.py rm KMoEGateDeepSeekV3, fall back to KMoEGate 2025-04-01 07:13:05 +00:00
layernorm.py add balance-serve, support concurrence 2025-03-31 22:55:32 +08:00
linear.py Fix TypeError when invoking KLinearCPUInfer.forward() 2025-04-07 12:03:35 +08:00
mlp.py add balance-serve, support concurrence 2025-03-31 22:55:32 +08:00
models.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00
RoPE.py add balance-serve, support concurrence 2025-03-31 22:55:32 +08:00
triton_attention.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00
triton_attention_prefill.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00