Commit graph

18 commits

Author         SHA1        Message                                                           Date
Atream         f4c198bd42  support absorb for prefill long context                           2025-02-25 08:52:02 +00:00
DDong Jianwei  95d937c51d  tmp                                                               2025-02-23 18:51:42 +08:00
Atream         a529518346  clean PR code and disable flashinfer                              2025-02-19 04:42:47 +00:00
ceerrep        2ffb43f974  fix: adapt prefix cache in forward_linux_flashinfer               2025-02-18 11:40:15 +08:00
ceerrep        bb1cadfff3  Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache  2025-02-17 18:08:04 +08:00
Atream         038bc30888  fix precision bug imported by position_ids in 0.2.0               2025-02-17 09:23:14 +00:00
ceerrep        5ac266085e  fix: use flash_attn for faster prefill                            2025-02-17 00:11:24 +08:00
Azure          ff6b265e53  Mock triton mla due to precision issue                            2025-02-16 06:03:12 +00:00
Atream         92399283b6  Update attention.py                                               2025-02-15 15:43:35 +08:00
Atream         1084d4e4b4  linux support triton MLA kernel                                   2025-02-14 11:38:55 +00:00
Atream         bb35dc5b0d  init support for MLA using Attention kernel                       2025-02-13 15:01:14 +00:00
Azure          907251c743  done support deepseekv3                                           2025-02-04 15:53:38 +00:00
Azure          f873558a89  update rope calculation; update modeling.py; update gate for moe  2025-02-01 07:32:21 +00:00
Azure          476b1d8dc6  support deepseekv3; runable but have precition problem            2025-01-31 08:27:24 +00:00
Azure          c55de02f7b  fix qlen > 1000 mask is none error                                2024-09-02 02:58:10 +00:00
chenxl         4d1d561d28  [feature] release 0.1.3                                           2024-08-28 16:11:43 +00:00
TangJingqi     67043b4b5c  [fix] format classes and files name                               2024-08-15 10:44:59 +08:00
chenxl         18c42e67df  Initial commit                                                    2024-07-27 16:06:58 +08:00