Commit graph

323 commits

Author SHA1 Message Date
Atream
5ec33d046d optimize gguf dequant, save mem, support Q2_K
use marlin for lm_head, lm_head only calc last token for prefill
extend context window to 19K for DeepSeek-V3/R1 within 24GB VRAM
2025-02-22 06:13:01 +00:00
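The "lm_head only calc last token for prefill" optimization in this commit can be illustrated with a minimal sketch (illustrative Python only, not the actual ktransformers code; the marlin-quantized projection is elided):

```python
import torch

def lm_head_last_token(hidden_states: torch.Tensor,
                       lm_head: torch.nn.Linear) -> torch.Tensor:
    # During prefill only the final position's logits are needed to
    # sample the next token, so slicing before the projection skips a
    # [seq_len, hidden] x [hidden, vocab] matmul for every earlier token.
    last_hidden = hidden_states[:, -1:, :]   # [batch, 1, hidden]
    return lm_head(last_hidden)              # [batch, 1, vocab]
```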
Atream
7e1fe256c8 optimize GPU 2025-02-21 05:06:57 +00:00
Azure
25c5bddd08
Merge pull request #506 from makllama/musa
feat: Support Moore Threads GPU
2025-02-20 22:50:31 +08:00
liam
97e1dc97f6 fix .so bug 2025-02-20 21:24:46 +08:00
xv44586
b05cc5685d fix: fix SSE formatting 2025-02-20 15:01:35 +08:00
Xiaodong Ye
2207f6cd14 feat: Support Moore Threads GPU
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-02-19 18:26:55 +08:00
Atream
a529518346 clean PR code and disable flashinfer 2025-02-19 04:42:47 +00:00
Atream
cf4da5fd47
Merge pull request #382 from ceerRep/server-prefix-cache
fix server and add prefix cache for server
2025-02-19 12:33:36 +08:00
ceerrep
584c7d5639 fix: object type for non-streaming response 2025-02-18 23:50:28 +08:00
ceerrep
6d45871de8 fix: as a workaround, return dummy usage stats 2025-02-18 22:39:49 +08:00
liam
75d7d423b0 🔖 release v0.2.1.post1 2025-02-18 20:45:48 +08:00
ceerrep
db61375f1a Merge remote-tracking branch 'kvai/main' into server-prefix-cache 2025-02-18 17:51:22 +08:00
liam
592e13d453 add mmlu_pro test 2025-02-18 14:43:38 +08:00
liam
07a0555016 fix device and add test 2025-02-18 12:52:17 +08:00
ceerrep
73d072f609 Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache 2025-02-18 11:44:28 +08:00
ceerrep
2ffb43f974 fix: adapt prefix cache in forward_linux_flashinfer 2025-02-18 11:40:15 +08:00
Xie Weiyu
f029588b61 fix server warmup 2025-02-18 11:39:45 +08:00
ceerrep
c70b6f4d5b fix: use 'cuda:0' by default if torch_device is 'cuda' 2025-02-18 11:15:17 +08:00
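A plausible shape for this fix, as a hedged sketch (the function name is hypothetical):

```python
import torch

def normalize_device(torch_device: str) -> torch.device:
    # A bare "cuda" resolves to whatever the current device happens to
    # be; pinning it to "cuda:0" keeps weights and the KV cache on one
    # well-defined GPU.
    if torch_device == "cuda":
        torch_device = "cuda:0"
    return torch.device(torch_device)
```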
Xie Weiyu
c176e516b5 server mix mla 2025-02-17 20:40:28 +08:00
ceerrep
ee24eb8dc3 fix: fix server for triton kernel 2025-02-17 18:08:45 +08:00
ceerrep
bb1cadfff3 Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache 2025-02-17 18:08:04 +08:00
Atream
038bc30888 fix precision bug introduced by position_ids in 0.2.0 2025-02-17 09:23:14 +00:00
ceerrep
cd9f7f8f34 fix: server: drop <think> tag in chat template 2025-02-17 14:25:27 +08:00
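One plausible way to drop the tag, assuming regex-based stripping (the actual chat-template change may differ):

```python
import re

_THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_think(text: str) -> str:
    # R1-style models emit a <think>...</think> reasoning block; removing
    # it keeps the chain-of-thought out of the history fed back to the
    # model on the next turn.
    return _THINK_RE.sub("", text)
```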
ceerrep
ca2090d89b feat: use model name in openai endpoint 2025-02-17 00:27:32 +08:00
ceerrep
5ac266085e fix: use flash_attn for faster prefill 2025-02-17 00:11:24 +08:00
ceerrep
bb0ccc7b1a feat: add prefix cache for server 2025-02-17 00:10:55 +08:00
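The core idea of a prefix cache, sketched here only in outline: keep the KV cache of a previous request keyed by its token ids, and on a new request prefill only the tokens after the longest shared prefix.

```python
def shared_prefix_len(cached: list[int], new: list[int]) -> int:
    # KV entries for the common token prefix can be reused verbatim;
    # only new[n:] still needs to be prefilled.
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n
```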
Atream
b84524622e support bf16 read 2025-02-16 06:43:27 +00:00
Azure
ff6b265e53 Mock triton mla due to precision issue 2025-02-16 06:03:12 +00:00
Atream
c5f036e8a4
Merge pull request #333 from kvcache-ai/feat_experts_gpu
toy support for experts on GPU, no CUDA Graph
2025-02-15 23:30:24 +08:00
Atream
c189d55bd1 toy support for experts on GPU, no CUDA Graph 2025-02-15 15:16:00 +00:00
wang jiahao
ae8da019c1
Merge pull request #330 from hrz6976/fix-nonetype
Thanks, I was about to submit a fix and found you had already made it. Thank you for your contribution
2025-02-15 22:50:51 +08:00
Azure
56382aa869
Merge pull request #290 from ZhangShuaiyi/dev/check_gguf_path
Ensure that the gguf_path argument is a directory.
2025-02-15 22:48:57 +08:00
12f23eddde
4516282ccc Fix NoneType object has no attribute zero_ 2025-02-15 22:04:45 +08:00
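The fix is presumably a None guard along these lines (hedged sketch, names hypothetical):

```python
import torch
from typing import Optional

def reset_buffer(buf: Optional[torch.Tensor]) -> None:
    # Calling .zero_() on an unallocated buffer raises
    # "'NoneType' object has no attribute 'zero_'"; guard first.
    if buf is not None:
        buf.zero_()
```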
MuWinds
d3b45d5704
Merge branch 'kvcache-ai:main' into main 2025-02-15 21:57:08 +08:00
UnicornChan
65d73ea3f8
Merge pull request #317 from kvcache-ai/develop-0.2.1
[feature] update docker image and entrypoint
2025-02-15 16:19:18 +08:00
chenxl
0e4b7a3929 [feature] update docker image and entrypoint 2025-02-15 07:55:33 +00:00
Atream
92399283b6
Update attention.py 2025-02-15 15:43:35 +08:00
Atream
d90749d35d
Update triton_attention.py 2025-02-15 15:41:01 +08:00
Shuaiyi
22280bf17f get dirname if gguf_path is a file 2025-02-15 07:08:22 +00:00
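A hedged sketch of the path handling this commit describes (function name hypothetical):

```python
import os

def resolve_gguf_dir(gguf_path: str) -> str:
    # The loader expects a directory of GGUF shards; if a single .gguf
    # file is passed, fall back to its parent directory.
    return os.path.dirname(gguf_path) if os.path.isfile(gguf_path) else gguf_path
```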
MuWinds
f74c2d1d17
Fix deprecated torch.backends.cuda.sdp_kernel() usage 2025-02-15 12:41:51 +08:00
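The deprecation warning points at the replacement API in torch.nn.attention (available since PyTorch 2.3); a minimal sketch of the migration:

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

def attention(q, k, v):
    # Old: with torch.backends.cuda.sdp_kernel(enable_flash=True, ...):
    # New: select the backend explicitly via sdpa_kernel().
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        return torch.nn.functional.scaled_dot_product_attention(
            q, k, v, is_causal=True)
```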
Atream
1946493f2d warm_up before capture 2025-02-14 15:52:21 +00:00
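Warming up before capture follows the standard CUDA Graphs pattern from the PyTorch docs (sketch only; the real capture wraps the model's decode step):

```python
import torch

def capture_step(model, static_input):
    # Eager warmup on a side stream lets lazy allocations and kernel
    # autotuning settle before capture; capturing cold can bake in
    # one-time initialization work or fail outright.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            model(static_input)
    torch.cuda.current_stream().wait_stream(s)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_output = model(static_input)
    return graph, static_output
```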
Atream
885a91e7db
Merge pull request #294 from kvcache-ai/feat-fast-MLA
Feat fast MLA
2025-02-14 19:40:36 +08:00
Atream
1084d4e4b4 linux support triton MLA kernel 2025-02-14 11:38:55 +00:00
Azure
95c81eaf01
Merge branch 'kvcache-ai:main' into main 2025-02-14 11:53:52 +08:00
Azure
b7653b9c4f add V3/R1 8 gpu yaml example 2025-02-14 02:56:13 +00:00
Azure
ae5d9e11a9
Merge pull request #227 from hrz6976/main
Add a lock to server inference()
2025-02-14 10:35:11 +08:00
Atream
bb35dc5b0d init support for MLA using Attention kernel 2025-02-13 15:01:14 +00:00
hrz6976
2c3dcd9774 Add a lock to server inference() 2025-02-13 10:05:22 +00:00
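A minimal sketch of such a lock (illustrative; model.generate() stands in for the server's actual inference call):

```python
import threading

_infer_lock = threading.Lock()

def inference(model, prompt: str) -> str:
    # Only one request may drive the model at a time: concurrent calls
    # would interleave KV-cache updates and corrupt generation state.
    with _infer_lock:
        return model.generate(prompt)  # hypothetical generate() API
```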
ZiWei Yuan
76b081879a
Merge pull request #224 from kvcache-ai/server_support
Server support
2025-02-13 17:28:08 +08:00
liam
8d5ebe49ab 📝 fix some debug output and update doc 2025-02-13 17:25:12 +08:00