| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| qiyuxinlin | c6aa379de2 | support safetensors load; delete the architectures argument | 2025-05-09 10:38:29 +00:00 |
| Aubrey Li | b3a1fcf471 | ktransformers/utils: fix _get_logits_warper error | 2025-05-01 08:13:09 +08:00 |
| dongjw | 1b7672937b | update install docs and fix local_chat bug | 2025-04-03 12:42:41 +08:00 |
| Atream | 25cee5810e | add balance-serve; support concurrency | 2025-03-31 22:55:32 +08:00 |
| Atream | d453c320f1 | fix flashinfer precision | 2025-03-07 14:07:00 +00:00 |
| Atream | f35e8d41d8 | support chunked prefill; support 139K context within 24 GB VRAM | 2025-03-01 11:28:25 +00:00 |
| Shuaiyi | a34a25d5cc | delete unused code | 2025-02-27 13:18:19 +00:00 |
| Atream | 85e2cc7bf4 | Merge pull request #719 from kvcache-ai/fix-use-generation-json (use generation config from the JSON file in the official repo) | 2025-02-27 19:49:41 +08:00 |
| Atream | e645d84794 | use generation config from the JSON file in the official repo | 2025-02-27 11:48:34 +00:00 |
| Atream | 8db6a4d402 | Merge branch 'main' into main | 2025-02-27 12:12:32 +08:00 |
| Atream | 477ac28a9c | fix: update flashinfer_wrapper_local_chat | 2025-02-25 12:47:31 +00:00 |
| Atream | b443c7dfa2 | Merge pull request #657 from kvcache-ai/feat-absorb-for-long-prefill (feat: absorb for long prefill) | 2025-02-25 16:53:21 +08:00 |
| Atream | f4c198bd42 | support absorb for long-context prefill | 2025-02-25 08:52:02 +00:00 |
| Azure | ca7366d2db | Merge remote-tracking branch 'upstream/develop-0.2.2' into support-fp8 | 2025-02-24 11:58:10 +00:00 |
| Azure | 581a524f65 | add a data loader to read special weights for fp8; add a special-weight processing script | 2025-02-24 11:34:17 +00:00 |
| Yuhao Tsui | cea07d1998 | feat: clear CUDA cache during weight loading to prevent OOM on GPUs with <=8 GB VRAM; explicitly clearing the cache mitigates memory fragmentation, which particularly benefits low-VRAM GPUs | 2025-02-24 10:09:42 +08:00 |
| Atream | e8e02e5ccc | support Moonlight | 2025-02-23 14:21:18 +00:00 |
| Azure | 7b7c6a657d | add fp8 linear kernel; add empty-cache calls to fit in 16 GB VRAM (by 'wkGCaSS - Zhihu', https://zhuanlan.zhihu.com/p/25491611225) | 2025-02-22 13:05:08 +00:00 |
| Atream | 5ec33d046d | optimize GGUF dequant to save memory; support Q2_K; use Marlin for lm_head, computing only the last token during prefill; extend context window to 19K for DeepSeek-V3/R1 within 24 GB VRAM | 2025-02-22 06:13:01 +00:00 |
| Atream | 038bc30888 | fix precision bug introduced by position_ids in 0.2.0 | 2025-02-17 09:23:14 +00:00 |
| Atream | 1946493f2d | warm up before capture | 2025-02-14 15:52:21 +00:00 |
| Atream | bb35dc5b0d | initial support for MLA using the Attention kernel | 2025-02-13 15:01:14 +00:00 |
| liam | d07087a7e2 | ⚡ support R1 forced thinking | 2025-02-11 15:43:41 +08:00 |
| yangshen | ee72cee050 | fix: the tokens returned by prefill_and_generate | 2024-09-05 05:29:23 +00:00 |
| chenxl | 4d1d561d28 | [feature] release 0.1.3 | 2024-08-28 16:11:43 +00:00 |
| Atream | 412055d450 | [feature] experts can be injected using CPUInfer; [fix] fix the ktransformers interface when using the new CUDAGraphRunner; [fix] fix YAML and optimize logic so the top rule has the highest priority | 2024-08-14 16:10:54 +08:00 |
| chenxl | f5f79f5c0e | [ADD] support multi-GPU, qlen>1, Q5_K | 2024-08-12 11:41:26 +00:00 |
| chenxl | 18c42e67df | Initial commit | 2024-07-27 16:06:58 +08:00 |