Commit graph

37 commits

Author SHA1 Message Date
Atream
024009675e Merge branch 'main' into feat-more-context 2025-02-22 06:17:39 +00:00
Atream
7e1fe256c8 optimize GPU 2025-02-21 05:06:57 +00:00
Atream
a529518346 clean PR code and disable flashinfer 2025-02-19 04:42:47 +00:00
ceerrep
584c7d5639 fix: object type for non-streaming response 2025-02-18 23:50:28 +08:00
ceerrep
6d45871de8 fix: workaround to return dummy usage 2025-02-18 22:39:49 +08:00
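The two response-shape fixes above target OpenAI API compatibility: a non-streaming chat completion must report `object: "chat.completion"` (streaming chunks use `"chat.completion.chunk"`), and many clients expect a `usage` block even before real token accounting is wired up. A minimal sketch of that shape — field names follow the public OpenAI schema, but the helper itself is hypothetical, not this repo's code:

```python
import time
import uuid

def build_chat_completion(model: str, text: str) -> dict:
    """Hypothetical helper: OpenAI-style non-streaming response body."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",  # streaming chunks use "chat.completion.chunk"
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        # Dummy usage as a workaround while real token counts are unavailable.
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```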
ceerrep
73d072f609 Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache 2025-02-18 11:44:28 +08:00
Xie Weiyu
f029588b61 fix server warmup 2025-02-18 11:39:45 +08:00
ceerrep
c70b6f4d5b fix: use 'cuda:0' by default if torch_device is 'cuda' 2025-02-18 11:15:17 +08:00
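The device fix above addresses a common PyTorch pitfall: a bare `"cuda"` string is ambiguous on multi-GPU machines and can land tensors on different devices than intended, so it gets pinned to an explicit index. A sketch of that kind of normalization (the helper name is an assumption):

```python
import torch

def normalize_device(torch_device: str) -> torch.device:
    """Hypothetical: pin a bare 'cuda' to a concrete index, defaulting to 0."""
    if torch_device == "cuda":
        torch_device = "cuda:0"
    return torch.device(torch_device)
```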
Xie Weiyu
c176e516b5 server mix mla 2025-02-17 20:40:28 +08:00
ceerrep
ee24eb8dc3 fix: fix server for triton kernel 2025-02-17 18:08:45 +08:00
ceerrep
cd9f7f8f34 fix: server: drop <think> tag in chat template 2025-02-17 14:25:27 +08:00
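Dropping the `<think>` tag from the chat template keeps a reasoning model's thinking block out of the history fed back on the next turn. A hedged sketch of one way to strip it (regex and function name are assumptions, not the commit's implementation):

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>", flags=re.DOTALL)

def drop_think(text: str) -> str:
    """Hypothetical: remove a <think>...</think> block from model output."""
    return THINK_RE.sub("", text).lstrip()
```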
ceerrep
ca2090d89b feat: use model name in openai endpoint 2025-02-17 00:27:32 +08:00
ceerrep
bb0ccc7b1a feat: add prefix cache for server 2025-02-17 00:10:55 +08:00
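A server-side prefix cache reuses KV-cache state for the longest token prefix shared with an earlier request, so only the new suffix needs prefill. A minimal sketch of the matching step (data structures are assumptions, not this PR's implementation):

```python
def longest_prefix_match(cached: list[int], prompt: list[int]) -> int:
    """Hypothetical: count leading token ids shared with a cached prompt;
    only prompt[n:] then needs to be prefilled."""
    n = 0
    for a, b in zip(cached, prompt):
        if a != b:
            break
        n += 1
    return n
```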
hrz6976
2c3dcd9774 Add a lock to server inference() 2025-02-13 10:05:22 +00:00
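Serializing `inference()` with a lock keeps concurrent requests from interleaving on a single model instance and its shared KV state. A sketch of the pattern (class and method names are hypothetical; an `asyncio.Lock` would play the same role in an async server):

```python
import threading

class InferenceServer:
    """Hypothetical wrapper: serialize access to one model instance."""

    def __init__(self, model):
        self.model = model
        self._lock = threading.Lock()

    def inference(self, prompt: str) -> str:
        # Only one request may drive the model at a time; concurrent
        # callers block here instead of corrupting shared state.
        with self._lock:
            return self.model.generate(prompt)  # hypothetical model API
```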
liam
4385e85096 support force thinking 2025-02-12 12:43:53 +08:00
liam
6f3a39be08 update force_think config 2025-02-12 12:10:16 +08:00
liam
e536e1420d update force_think 2025-02-12 11:42:55 +08:00
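The three force_think commits above likely amount to seeding the assistant turn with an opening `<think>` tag so a reasoning model always emits an explicit reasoning block. A hedged sketch (function name and prompt shape are assumptions, not the repo's code):

```python
def apply_force_think(prompt: str, force_think: bool = True) -> str:
    """Hypothetical: seed the assistant turn with an opening <think> tag
    so a reasoning model always produces a reasoning block."""
    if force_think:
        return prompt + "<think>\n"
    return prompt
```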
UnicornChan
7527619f53 Merge pull request #122 from kvcache-ai/feat-DeepSeekV3: [Feat] add support to DeepSeekV3 2025-02-10 13:54:46 +08:00
liam
bf1d413be0 Merge branch 'feat-DeepSeekV3' of github.com:kvcache-ai/ktransformers into feat-DeepSeekV3 2025-02-08 13:17:10 +08:00
liam
c18ecd7b7f add flushed printing to local_chat output and change DeepSeek-V3's default optimize YAML to single GPU 2025-02-08 13:15:52 +08:00
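Flushed printing matters for streamed CLI output: without `flush=True`, tokens can sit in the stdout buffer instead of appearing as they are generated. The idiom, in a self-contained form:

```python
import time

for token in ["Hello", ",", " world"]:  # stand-in for a generation stream
    print(token, end="", flush=True)    # emit immediately instead of buffering
    time.sleep(0.1)
print()
```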
RodriMora
b1bff2a405 Added a simple /models endpoint to work with frontends, like Open WebUI, that don't allow bypassing the model check 2025-02-07 10:30:39 +01:00
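An OpenAI-compatible models endpoint only needs to list model ids so frontends can populate their model picker. A minimal FastAPI sketch (route path and model id are assumptions, not the commit's code):

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/v1/models")
def list_models() -> dict:
    # Hypothetical: advertise a single served model in OpenAI's list format.
    return {
        "object": "list",
        "data": [
            {"id": "ktransformers-model", "object": "model", "owned_by": "ktransformers"},
        ],
    }
```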
Azure
c4d9bc6670 support KExpertsMarlin backend 2025-02-07 05:57:40 +00:00
Azure
907251c743 done support deepseekv3 2025-02-04 15:53:38 +00:00
Azure
f873558a89 update RoPE calculation; update modeling.py; update gate for MoE 2025-02-01 07:32:21 +00:00
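The commit's diff isn't shown here; for orientation, the textbook rotary-embedding application it presumably adjusts looks like this (generic HF-style form, not the DeepSeek-specific variant in the repo):

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    """Split the last dim in half and rotate: (x1, x2) -> (-x2, x1)."""
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(q, k, cos, sin):
    # Standard RoPE: q' = q*cos + rotate_half(q)*sin, same for k;
    # cos/sin are position-dependent and broadcast over heads.
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin
```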
Azure
476b1d8dc6 support deepseekv3; runnable but has precision problems 2025-01-31 08:27:24 +00:00
liam
04cebec4bb rm opt config path default value and fix some config logic bug 2024-11-14 20:02:30 +08:00
liam
c2b4dc805c 🚑️: roll back transformer.py and find that its multi-turn chat history has a minor accuracy error 2024-11-04 14:02:19 +08:00
liam
a148da2cfe rm sensitive info in config.yaml; add README for the Makefile; support old model_path config 2024-11-04 14:02:19 +08:00
anyanqilin
9a2e7057c8 wjh fix change 2024-11-04 14:02:19 +08:00
anyanqilin
2d67016d14 wjh-change 2024-11-04 14:02:19 +08:00
liam
7c94df4bcf 🚑️: roll back transformer.py to the buggy version, and fix a typo in local_chat.py 2024-11-04 14:02:19 +08:00
liam
dd1d8667f3 refactor local_chat and fix message slice bug in server 2024-11-04 14:02:19 +08:00
chenxl
4d1d561d28 [feature] release 0.1.3 2024-08-28 16:11:43 +00:00
chenxl
b9f0819a86 None for load config 2024-08-22 15:52:25 +00:00
TangJingqi
170b7a6001 fix server not accepting a YAML path as a param; fix server static cache device problem 2024-08-21 14:19:43 +08:00
Atream
412055d450 [feature] experts can be injected using CPUInfer; [fix] fix ktransformers interface when using the new CUDAGraphRunner; [fix] fix YAML optimize logic: the top rule has the highest priority 2024-08-14 16:10:54 +08:00
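"Top rule has the highest priority" describes first-match-wins resolution: when several optimize rules match a module, the one nearest the top of the YAML file applies. A toy sketch of that resolution order (schema simplified; not ktransformers' actual rule engine):

```python
import re

def pick_rule(module_name: str, rules: list[dict]):
    """Hypothetical first-match-wins lookup: rules earlier in the list
    (i.e., higher in the YAML file) take priority over later ones."""
    for rule in rules:
        if re.search(rule["match"], module_name):
            return rule["replace"]
    return None

rules = [
    {"match": r"\.mlp\.experts$", "replace": "KExpertsCPU"},  # top rule wins
    {"match": r"\.mlp\.", "replace": "KDeepseekMoE"},
]
print(pick_rule("model.layers.0.mlp.experts", rules))  # -> KExpertsCPU
```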
chenxl
18c42e67df Initial commit 2024-07-27 16:06:58 +08:00