kvcache-ai-ktransformers/ktransformers/operators
Aubrey Li d347aeb518 VLinearMarlin: padding to input.shape[0] to avoid CUDA error
Fixes the following runtime error when running with the --no-use_cuda_graph option:

Traceback (most recent call last):
  File "/home/aubrey/miniforge3/envs/kt/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/aubrey/miniforge3/envs/kt/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/aubrey/miniforge3/envs/kt/lib/python3.11/site-packages/ktransformers/server/backend/interfaces/balance_serve.py", line 282, in run_engine
    engine.loop()
  File "/home/aubrey/miniforge3/envs/kt/lib/python3.11/site-packages/ktransformers/server/backend/interfaces/balance_serve.py", line 234, in loop
    self.model_runner.run(self.batch, self.query_manager)
  File "/home/aubrey/miniforge3/envs/kt/lib/python3.11/site-packages/ktransformers/server/balance_serve/inference/model_runner.py", line 220, in run
    self.output.logits[0] = self.output.logits[0][self.input[cuda_graph_idx].minibatch.logits_start]
                            ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-05-18 15:11:37 +08:00
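The fix the commit title describes is to zero-pad the activation's leading (row/batch) dimension before invoking the Marlin GEMM, since a kernel launched for a fixed row count can otherwise index past the end of a smaller input buffer and trigger exactly this kind of illegal memory access. Below is a minimal sketch of that padding pattern; `pad_rows`, `target_rows`, and `marlin_gemm` are hypothetical names used for illustration, not the actual VLinearMarlin code in linear.py.

```python
import torch

def pad_rows(x: torch.Tensor, target_rows: int) -> torch.Tensor:
    # Zero-pad dim 0 of `x` up to `target_rows` so the input matches the
    # row count the quantized GEMM kernel was launched for. (Hypothetical
    # helper, not the repo's implementation.)
    if x.shape[0] >= target_rows:
        return x
    pad = torch.zeros(
        target_rows - x.shape[0], *x.shape[1:],
        dtype=x.dtype, device=x.device,
    )
    return torch.cat([x, pad], dim=0)
```

The padded rows are sliced off after the matmul, e.g. `y = marlin_gemm(pad_rows(x, n))[: x.shape[0]]`, so the padding only changes the shape seen by the kernel, not the result.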
__init__.py Initial commit 2024-07-27 16:06:58 +08:00
attention.py Enable support for Intel XPU devices, add support for DeepSeek V2/V3 first 2025-05-14 19:37:27 +00:00
balance_serve_attention.py update torch MLA kernel 2025-05-14 09:45:12 +00:00
base_operator.py support safetensor load, delete architectures argument 2025-05-09 10:38:29 +00:00
cpuinfer.py cpuinfer: filter repeated backend instantiation 2025-03-10 22:03:04 +08:00
dynamic_attention.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00
experts.py fix deduplicate_and_sort cudagraphs 2025-05-15 04:09:34 +00:00
flashinfer_batch_prefill_wrapper.py support safetensor load, delete architectures argument 2025-05-09 10:38:29 +00:00
flashinfer_wrapper.py fix-hopper-flashinfer 2025-04-29 11:06:34 +08:00
gate.py Enable support for Intel XPU devices, add support for DeepSeek V2/V3 first 2025-05-14 19:37:27 +00:00
layernorm.py Enable support for Intel XPU devices, add support for DeepSeek V2/V3 first 2025-05-14 19:37:27 +00:00
linear.py VLinearMarlin: padding to input.shape[0] to avoid CUDA error 2025-05-18 15:11:37 +08:00
mlp.py support safetensor load, delete architectures argument 2025-05-09 10:38:29 +00:00
models.py Enable support for Intel XPU devices, add support for DeepSeek V2/V3 first 2025-05-14 19:37:27 +00:00
RoPE.py support safetensor load, delete architectures argument 2025-05-09 10:38:29 +00:00
triton_attention.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00
triton_attention_prefill.py merge main; Add torch q8 linear 2025-03-14 05:52:07 -04:00