Azure | ca7366d2db | Merge remote-tracking branch 'upstream/develop-0.2.2' into support-fp8 | 2025-02-24 11:58:10 +00:00

Azure | 581a524f65 | Add data loader to read special weights for fp8; add special weight process script | 2025-02-24 11:34:17 +00:00
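A minimal sketch of what such a loader can look like, assuming safetensors-style files that store float8 weights alongside dequantization scales; the key names (`weight`, `weight_scale_inv`) are illustrative assumptions, not the actual ktransformers layout:

```python
import torch
from safetensors.torch import load_file

def load_fp8_weight(path: str, prefix: str) -> torch.Tensor:
    # Hypothetical layout: an fp8 tensor plus a dequantization scale.
    tensors = load_file(path)
    w_fp8 = tensors[f"{prefix}.weight"]            # stored as torch.float8_e4m3fn
    scale = tensors[f"{prefix}.weight_scale_inv"]  # per-tensor scale (assumed)
    # Dequantize to half precision for use in a plain linear layer.
    return w_fp8.to(torch.float16) * scale.to(torch.float16)
```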
Azure | 7b7c6a657d | Add fp8 linear kernel; add empty cache to fit in 16G VRAM; by 'wkGCaSS - Zhihu https://zhuanlan.zhihu.com/p/25491611225' | 2025-02-22 13:05:08 +00:00
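Two generic patterns sit behind this commit: a fallback fp8 linear that dequantizes before the matmul (a dedicated fp8 kernel fuses these steps), and `torch.cuda.empty_cache()` calls between memory-heavy phases to stay within a 16G budget. A hedged sketch, not the actual kernel:

```python
import torch

def fp8_linear(x: torch.Tensor, w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Naive fallback: dequantize, then matmul. A real fp8 kernel would
    # fuse the dequantization into the GEMM instead.
    w = w_fp8.to(x.dtype) * scale.to(x.dtype)
    return x @ w.t()

def run_layer(layer, x):
    out = layer(x)
    # Return cached allocator blocks to the driver between phases so peak
    # VRAM stays under budget; tensors still referenced are not freed.
    torch.cuda.empty_cache()
    return out
```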
Atream | 024009675e | Merge branch 'main' into feat-more-context | 2025-02-22 06:17:39 +00:00

Atream | 5ec33d046d | optimize gguf dequant, save mem, support Q2_K; use marlin for lm_head, lm_head only calc last token for prefill; extend context window to 19K for DeepSeek-V3/R1 within 24GB VRAM | 2025-02-22 06:13:01 +00:00
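The "lm_head only calc last token for prefill" optimization rests on a simple observation: during prefill only the final position's logits are needed to sample the next token, so projecting every position through the large [hidden, vocab] matrix is wasted work. A minimal sketch of the pattern (names are illustrative):

```python
import torch

def prefill_logits(hidden_states: torch.Tensor, lm_head: torch.nn.Linear) -> torch.Tensor:
    # hidden_states: [batch, seq_len, hidden]. Sampling the next token
    # after prefill only needs the last position, so slice before the
    # expensive [hidden, vocab] projection.
    last_hidden = hidden_states[:, -1:, :]  # [batch, 1, hidden]
    return lm_head(last_hidden)             # [batch, 1, vocab]
```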
Atream | a529518346 | clean PR code and disable flashinfer | 2025-02-19 04:42:47 +00:00

ceerrep | 2ffb43f974 | fix: adapt prefix cache in forward_linux_flashinfer | 2025-02-18 11:40:15 +08:00
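Prefix caching in general terms: reuse the KV entries already computed for the longest shared prefix of the incoming prompt, and run the forward pass only over the remaining suffix. A framework-agnostic sketch of the matching step (not the flashinfer API):

```python
def split_by_cached_prefix(cached_tokens: list[int], prompt: list[int]) -> tuple[int, list[int]]:
    # Find the longest common prefix between the tokens whose KV entries
    # are already cached and the new prompt; only the suffix after that
    # point needs attention computed.
    n = 0
    for cached, new in zip(cached_tokens, prompt):
        if cached != new:
            break
        n += 1
    return n, prompt[n:]
```

With `n` known, the attention call consumes the cached KV for positions `[0, n)` and computes queries only for `prompt[n:]`.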
ceerrep | bb1cadfff3 | Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache | 2025-02-17 18:08:04 +08:00
Atream | 038bc30888 | fix precision bug introduced by position_ids in 0.2.0 | 2025-02-17 09:23:14 +00:00
ceerrep | 5ac266085e | fix: use flash_attn for faster prefill | 2025-02-17 00:11:24 +08:00

Azure | ff6b265e53 | Mock triton MLA due to precision issue | 2025-02-16 06:03:12 +00:00
Atream | c5f036e8a4 | Merge pull request #333 from kvcache-ai/feat_experts_gpu: toy support for experts on GPU, no CUDA Graph | 2025-02-15 23:30:24 +08:00

Atream | c189d55bd1 | toy support for experts on GPU, no CUDA Graph | 2025-02-15 15:16:00 +00:00
Atream | 92399283b6 | Update attention.py | 2025-02-15 15:43:35 +08:00

Atream | d90749d35d | Update triton_attention.py | 2025-02-15 15:41:01 +08:00
Atream | 885a91e7db | Merge pull request #294 from kvcache-ai/feat-fast-MLA: feat fast MLA | 2025-02-14 19:40:36 +08:00

Atream | 1084d4e4b4 | linux support triton MLA kernel | 2025-02-14 11:38:55 +00:00

Atream | bb35dc5b0d | init support for MLA using Attention kernel | 2025-02-13 15:01:14 +00:00
liam | 8d5ebe49ab | 📝 ⚡ fix some debug output and update doc | 2025-02-13 17:25:12 +08:00

liam | c74453d8ca | 📝 add doc support and fix bug in qwen2 | 2025-02-13 16:37:43 +08:00

liam | 83401dbb3b | ⚡ ready to publish | 2025-02-10 12:29:23 +08:00
Azure | c4d9bc6670 | support KExpertsMarlin backend | 2025-02-07 05:57:40 +00:00
Azure | 907251c743 | finish deepseekv3 support | 2025-02-04 15:53:38 +00:00

Azure | f748cd29f0 | fix rope; update moegate | 2025-02-01 18:05:45 +00:00

Azure | f873558a89 | update rope calculation; update modeling.py; update gate for moe | 2025-02-01 07:32:21 +00:00

Azure | 476b1d8dc6 | support deepseekv3; runnable but has precision problems | 2025-01-31 08:27:24 +00:00
xhedit | 234faf7987 | typo fix: KMisrtal -> KMistral | 2024-09-12 15:58:01 +00:00

Azure | c55de02f7b | fix qlen > 1000 mask-is-None error | 2024-09-02 02:58:10 +00:00
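The failure mode here is typical of attention paths that receive `mask=None` for long prompts; the generic fix is to materialize a causal mask whenever none is supplied. A hedged sketch, not the actual ktransformers code path:

```python
import torch

def ensure_causal_mask(mask: torch.Tensor | None, qlen: int, device: torch.device) -> torch.Tensor:
    # If the caller passed no mask (the failing qlen > 1000 path), build
    # a standard causal mask: position i may attend only to j <= i.
    if mask is None:
        mask = torch.tril(torch.ones(qlen, qlen, dtype=torch.bool, device=device))
    return mask
```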
TangJingqi | 6735beb5b6 | Fix: cannot offload whole layer to CPU | 2024-08-29 19:10:14 +08:00

chenxl | 4d1d561d28 | [feature] release 0.1.3 | 2024-08-28 16:11:43 +00:00
TangJingqi | 67043b4b5c | [fix] format class and file names | 2024-08-15 10:44:59 +08:00

Atream | 412055d450 | [feature] experts can be injected using CPUInfer; [fix] fix ktransformers interface when using the new CUDAGraphRunner; [fix] fix YAML and optimize logic so the top rule has the highest priority | 2024-08-14 16:10:54 +08:00
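ktransformers selects replacement operators by matching module names against regex rules in an injection YAML; "the top rule has the highest priority" implies first-match-wins scanning. A hedged sketch of that resolution order (rule schema and class names are simplified illustrations):

```python
import re

def resolve_rule(module_name: str, rules: list[dict]) -> dict | None:
    # First match wins: rules are scanned top to bottom, so a rule placed
    # earlier in the YAML overrides later, broader patterns.
    for rule in rules:
        if re.match(rule["match"]["name"], module_name):
            return rule
    return None

# Usage: the earlier, more specific rule shadows the catch-all below it.
rules = [
    {"match": {"name": r"^model\.layers\.0\."}, "replace": {"class": "KeepOnGPU"}},
    {"match": {"name": r"^model\.layers\."},    "replace": {"class": "OffloadToCPU"}},
]
assert resolve_rule("model.layers.0.mlp", rules)["replace"]["class"] == "KeepOnGPU"
```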
chenxl | f5f79f5c0e | [ADD] support multi-gpu qlen>1 q5_k | 2024-08-12 11:41:26 +00:00

chenht2022 | c1cc7d2cd2 | 1) Linear and MLP operators support qlen>1; 2) all operators now share a single memory buffer; 3) refactor CPUInfer submit/sync logic | 2024-08-08 09:04:36 +00:00
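The submit/sync refactor separates enqueueing work from awaiting completion, with all operators drawing scratch space from one shared buffer rather than per-operator allocations. A toy Python illustration of the submit/sync shape only (the real CPUInfer is a C++ thread pool):

```python
import threading
from queue import Queue

class MiniCPUInfer:
    # Toy submit/sync interface: tasks are enqueued to a worker thread
    # (submit) and the caller blocks until the queue drains (sync).
    def __init__(self) -> None:
        self.tasks: Queue = Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self) -> None:
        while True:
            fn, args = self.tasks.get()
            fn(*args)
            self.tasks.task_done()

    def submit(self, fn, *args) -> None:
        self.tasks.put((fn, args))

    def sync(self) -> None:
        self.tasks.join()
```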
chenxl | 18c42e67df | Initial commit | 2024-07-27 16:06:58 +08:00