Yuhao Tsui | cea07d1998 | 2025-02-24 10:09:42 +08:00
Feat: Clear cache during weight loading to prevent OOM on GPUs with <=8GB VRAM
This change explicitly clears the CUDA cache during weight loading to mitigate memory fragmentation, which is particularly beneficial for low-VRAM GPUs.
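The pattern behind this commit can be sketched as follows. This is a minimal illustration, not the actual ktransformers implementation: the function name, the `copy_to_device` callback, and the per-tensor loop are all hypothetical. The idea is simply to release PyTorch's cached-but-unused GPU blocks between individual weight copies so fragmentation cannot accumulate.

```python
import importlib.util

def load_weights_with_cache_clearing(state_dict, copy_to_device):
    """Hypothetical sketch: copy weights to the GPU one tensor at a time,
    clearing the CUDA caching allocator between copies to limit
    fragmentation on small (<=8GB) cards."""
    for name, tensor in state_dict.items():
        copy_to_device(name, tensor)  # move a single weight to the GPU
        # Return cached-but-unused blocks to the driver so the next large
        # allocation is less likely to fail with an OOM error.
        if importlib.util.find_spec("torch") is not None:
            import torch
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
```

The trade-off is speed for headroom: `torch.cuda.empty_cache()` makes subsequent allocations slower, which is acceptable during one-time weight loading.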
Atream | e8e02e5ccc | 2025-02-23 14:21:18 +00:00
support Moonlight
Atream | 5ec33d046d | 2025-02-22 06:13:01 +00:00
optimize gguf dequant, save mem, support Q2_K
use marlin for lm_head; lm_head only calculates the last token for prefill
extend context window to 19K for DeepSeek-V3/R1 within 24GB VRAM
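The "lm_head only calc last token for prefill" optimization rests on a simple observation: during prefill, only the final position's logits are needed to sample the first generated token, so the large vocabulary projection can skip every earlier prompt position. A minimal pure-Python sketch (function and argument names are illustrative, not from the codebase):

```python
def prefill_logits(hidden_states, lm_head_weight):
    """Hypothetical sketch: during prefill, project only the final prompt
    position through lm_head. For a prompt of seq_len tokens and a vocab of
    size V, this avoids computing (seq_len - 1) * V unneeded logits."""
    last = hidden_states[-1]  # hidden vector of the final prompt token
    # naive mat-vec: logits[v] = sum_d W[v][d] * last[d]
    return [sum(w * h for w, h in zip(row, last)) for row in lm_head_weight]
```

For models with very large vocabularies (DeepSeek-V3's is over 100K entries), this saves both compute and the memory for a `[seq_len, vocab]` logits tensor, which is part of how a longer context fits in 24GB of VRAM.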
Atream | 038bc30888 | 2025-02-17 09:23:14 +00:00
fix precision bug introduced by position_ids in 0.2.0
Atream | 1946493f2d | 2025-02-14 15:52:21 +00:00
warm_up before capture
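"warm_up before capture" refers to a standard CUDA graph requirement: one-time side effects (kernel autotuning, cuBLAS handle creation, workspace allocation) must happen before capture begins, because they are forbidden inside a capture. A hedged sketch of the pattern, assuming a `run_step` callable that executes one model step; this is not the project's actual `CUDAGraphRunner`:

```python
import importlib.util

def capture_with_warm_up(run_step, n_warm_up=3):
    """Hypothetical sketch: run the step eagerly a few times so one-time
    initialization happens outside CUDA graph capture, then record it."""
    for _ in range(n_warm_up):
        run_step()                      # eager warm-up iterations
    if importlib.util.find_spec("torch") is None:
        return None                     # torch unavailable: warm-up only
    import torch
    if not torch.cuda.is_available():
        return None                     # no GPU: nothing to capture
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):       # record run_step into the graph
        run_step()
    return graph                        # replay later with graph.replay()
```

Skipping the warm-up typically makes `torch.cuda.graph(...)` fail outright, or bakes first-run autotuning choices into the captured graph.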
Atream | bb35dc5b0d | 2025-02-13 15:01:14 +00:00
init support for MLA using Attention kernel
liam | d07087a7e2 | 2025-02-11 15:43:41 +08:00
⚡ support R1 force thinking
yangshen | ee72cee050 | 2024-09-05 05:29:23 +00:00
Fix: the tokens returned by prefill_and_generate
chenxl | 4d1d561d28 | 2024-08-28 16:11:43 +00:00
[feature] release 0.1.3
Atream | 412055d450 | 2024-08-14 16:10:54 +08:00
[feature] experts can be injected using CPUInfer
[fix] fix ktransformers interface when using the new CUDAGraphRunner
[fix] fix YAML and optimize logic; the top rule has the highest priority
chenxl | f5f79f5c0e | 2024-08-12 11:41:26 +00:00
[ADD] support multi-gpu qlen>1 q5_k
chenxl | 18c42e67df | 2024-07-27 16:06:58 +08:00
Initial commit