Commit graph

116 commits

Author SHA1 Message Date
Azure
662c1e4c14 small fix for max new tokens 2025-03-05 09:25:41 +00:00
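A minimal sketch of what a max-new-tokens guard like this might look like; the names `resolve_max_new_tokens`, `cache_lens`, and the default budget are illustrative, not the actual ktransformers identifiers:

```python
# Hypothetical sketch: clamp a requested generation budget so that
# prompt + generated tokens never exceed the allocated KV cache.
DEFAULT_MAX_NEW_TOKENS = 1024  # illustrative server default

def resolve_max_new_tokens(requested: int | None, prompt_len: int, cache_lens: int) -> int:
    """Return a generation budget that fits within the KV cache."""
    budget = requested or DEFAULT_MAX_NEW_TOKENS
    return max(1, min(budget, cache_lens - prompt_len))
```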
wang jiahao
48b9800790 Merge pull request #759 from 3wweiweiwu/fix_top_p_typo (fix typo for top_p) 2025-03-02 13:58:11 +08:00
1668068727@qq.com
7cdf8139f0 fix ollama api temperature bug 2025-03-02 13:55:26 +08:00
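The Ollama API carries sampling settings in a nested `options` object rather than at the top level, which is a common source of temperature-handling bugs. A hedged sketch of reading it correctly (the handler and default are illustrative; the field layout follows the public Ollama API):

```python
# Hypothetical sketch: pull temperature out of an Ollama-style request body.
def extract_temperature(body: dict, default: float = 0.6) -> float:
    options = body.get("options") or {}
    temperature = options.get("temperature", default)
    # Guard against non-positive values that would break softmax sampling.
    return temperature if temperature > 0 else default
```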
Wix Woo
3aa0cfc29d fix typo for top_p 2025-03-01 20:15:36 +00:00
Atream
ca1dc1e7d1 Merge branch 'main' into main 2025-03-01 23:24:10 +08:00
Atream
fa03ea48dd Merge branch 'main' into feat-chunk-prefill-flashinfer 2025-03-01 11:35:09 +00:00
Atream
f35e8d41d8 support chunk prefill, support 139K context for 24G VRAM 2025-03-01 11:28:25 +00:00
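Chunked prefill processes a long prompt in fixed-size slices so activation memory stays bounded while the KV cache grows, which is what makes a 139K context fit in 24G of VRAM. A minimal sketch of the idea, assuming a `model.forward` that accepts a token slice plus a KV cache; all names are illustrative, not the ktransformers internals:

```python
import torch

@torch.no_grad()
def chunked_prefill(model, input_ids: torch.Tensor, kv_cache, chunk_size: int = 8192):
    """Prefill the prompt chunk by chunk instead of in one forward pass."""
    logits = None
    for start in range(0, input_ids.shape[1], chunk_size):
        chunk = input_ids[:, start:start + chunk_size]
        # Each chunk attends to all previously cached tokens via kv_cache,
        # so the result matches a single full-prompt forward pass.
        logits = model.forward(chunk, past_key_values=kv_cache, cache_position_start=start)
    return logits  # logits of the last chunk drive the first decoded token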
liam
80e0536fb0 Merge branch 'main' of https://github.com/KMSorSMS/ktransformers into main 2025-03-01 00:12:21 +08:00
liam
8ddc990668 fix server cache lens 2025-03-01 00:09:57 +08:00
qiyuxinlin
22df52e94e fix temperature 2025-02-27 21:00:44 +08:00
lazymio
b121ca4df8 Fix according to upstream changes 2025-02-27 18:11:35 +08:00
wang jiahao
26f7b4af11 Merge branch 'main' into temperature_top_p_from_request 2025-02-27 18:08:55 +08:00
Atream
798e1d0cfa Merge pull request #532 from xv44586/fix-sse-formatting (fix: fix SSE formatting) 2025-02-27 12:19:23 +08:00
Atream
f403cde6d4 Merge pull request #650 from ceerRep/main (feat: basic api key support) 2025-02-27 12:16:53 +08:00
swu-hyk
ec7e912fee modify 2025-02-26 19:21:30 +08:00
swu-hyk
68e7df3a25 implementation of chat routing for Ollama 2025-02-26 17:05:00 +08:00
Atream
b443c7dfa2 Merge pull request #657 from kvcache-ai/feat-absorb-for-long-prefill (Feat absorb for long prefill) 2025-02-25 16:53:21 +08:00
Atream
f4c198bd42 support absorb for prefill long context 2025-02-25 08:52:02 +00:00
Azure
36fbeee341 Update doc 2025-02-25 08:21:18 +00:00
ceerrep
f639fbc19e feat: basic api key support 2025-02-25 14:11:39 +08:00
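Basic API-key support on an OpenAI-compatible server usually means checking a bearer token before serving a route. A minimal sketch in FastAPI (which the ktransformers server is built on); the environment-variable name and route wiring are assumptions, not the actual implementation:

```python
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer(auto_error=False)

async def require_api_key(
    creds: HTTPAuthorizationCredentials | None = Depends(bearer),
) -> None:
    """Reject the request unless the Authorization: Bearer token matches."""
    expected = os.environ.get("KTRANSFORMERS_API_KEY")  # hypothetical env var
    if expected and (creds is None or creds.credentials != expected):
        raise HTTPException(status_code=401, detail="Invalid API key")

@app.get("/v1/models", dependencies=[Depends(require_api_key)])
async def list_models():
    return {"object": "list", "data": []}
```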
Azure
4dc5518e4d update fp8 kernel tutorial 2025-02-24 15:37:01 +00:00
lazymio
07eb712a73 Left out 2025-02-24 21:51:14 +08:00
lazymio
91062a834f Default values 2025-02-24 21:38:01 +08:00
lazymio
76487c4dcb Revert repetition_penalty as it is not in API spec 2025-02-24 21:30:03 +08:00
lazymio
05ad288453 Also /chat/completions 2025-02-24 21:08:36 +08:00
lazymio
bf36547f98 Also allow repetition_penalty 2025-02-24 21:07:35 +08:00
lazymio
8704c09192 Allow temperature and top_p from requests 2025-02-24 21:01:33 +08:00
Atream
024009675e Merge branch 'main' into feat-more-context 2025-02-22 06:17:39 +00:00
Atream
7e1fe256c8 optimize GPU 2025-02-21 05:06:57 +00:00
xv44586
b05cc5685d fix: fix SSE formatting 2025-02-20 15:01:35 +08:00
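Server-sent events have a strict wire format: each event is a `data: ` line terminated by a blank line, and OpenAI-style streams end with `data: [DONE]`. A missing prefix or blank line is exactly the kind of bug an SSE-formatting fix addresses. A minimal sketch of correct framing:

```python
import json

def sse_event(chunk: dict) -> str:
    """Frame one streaming chunk as a server-sent event."""
    return f"data: {json.dumps(chunk)}\n\n"

def sse_done() -> str:
    """OpenAI-compatible stream terminator."""
    return "data: [DONE]\n\n"
```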
Atream
a529518346 clean PR code and disable flashinfer 2025-02-19 04:42:47 +00:00
ceerrep
584c7d5639 fix: object type for non-streaming response 2025-02-18 23:50:28 +08:00
ceerrep
6d45871de8 fix: workaround return dummy usage 2025-02-18 22:39:49 +08:00
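These two fixes concern the shape of the non-streaming response: its `object` field must be `"chat.completion"` (streaming chunks use `"chat.completion.chunk"`), and many clients require a `usage` block, so a dummy one is returned as a workaround until real token accounting exists. A sketch following the public OpenAI schema; the builder function itself is illustrative:

```python
import time
import uuid

def build_response(model: str, text: str) -> dict:
    """Assemble a non-streaming OpenAI-style chat completion response."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",  # not "chat.completion.chunk"
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": text},
            "finish_reason": "stop",
        }],
        # Dummy usage as a workaround; real counts would replace the zeros.
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```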
ceerrep
73d072f609 Merge branch 'fix_precision_MLA' of https://github.com/kvcache-ai/ktransformers into server-prefix-cache 2025-02-18 11:44:28 +08:00
Xie Weiyu
f029588b61 fix server warmup 2025-02-18 11:39:45 +08:00
ceerrep
c70b6f4d5b fix: use 'cuda:0' by default if torch_device is 'cuda' 2025-02-18 11:15:17 +08:00
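A bare `"cuda"` device string leaves the GPU index implicit, which can scatter tensors and caches across devices; pinning it to an explicit index is the fix this commit describes. A minimal sketch:

```python
import torch

def normalize_device(torch_device: str) -> torch.device:
    """Pin a bare "cuda" string to an explicit GPU index."""
    if torch_device == "cuda":
        torch_device = "cuda:0"
    return torch.device(torch_device)
```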
Xie Weiyu
c176e516b5 server mix mla 2025-02-17 20:40:28 +08:00
ceerrep
ee24eb8dc3 fix: fix server for triton kernel 2025-02-17 18:08:45 +08:00
ceerrep
cd9f7f8f34 fix: server: drop <think> tag in chat template 2025-02-17 14:25:27 +08:00
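Reasoning models like DeepSeek-R1 emit a `<think>...</think>` block that should be stripped from prior assistant turns before the chat history is re-templated, or the model sees its own scratchpad as context. A sketch of the idea; the exact tag handling in ktransformers' chat template may differ:

```python
import re

THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def drop_think(content: str) -> str:
    """Remove a reasoning block from an assistant message."""
    return THINK_RE.sub("", content).lstrip()
```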
ceerrep
ca2090d89b feat: use model name in openai endpoint 2025-02-17 00:27:32 +08:00
ceerrep
bb0ccc7b1a feat: add prefix cache for server 2025-02-17 00:10:55 +08:00
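A server-side prefix cache compares the new prompt's token ids against what is already in the KV cache and only prefills the unmatched suffix, which makes multi-turn chat cheap. A minimal sketch of the matching step; names are illustrative, not the ktransformers implementation:

```python
def common_prefix_len(cached_ids: list[int], new_ids: list[int]) -> int:
    """Count how many leading tokens the cache already covers."""
    n = 0
    for a, b in zip(cached_ids, new_ids):
        if a != b:
            break
        n += 1
    return n

def plan_prefill(cached_ids: list[int], new_ids: list[int]) -> list[int]:
    keep = common_prefix_len(cached_ids, new_ids)
    return new_ids[keep:]  # only these tokens need a prefill pass
```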
MuWinds
f74c2d1d17 Fix deprecated torch.backends.cuda.sdp_kernel() usage 2025-02-15 12:41:51 +08:00
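The deprecated `torch.backends.cuda.sdp_kernel()` context manager is superseded by `torch.nn.attention.sdpa_kernel()`, which takes the allowed backends explicitly; that is the usual migration for this warning. A minimal example of the new API:

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

# Restrict scaled-dot-product attention to the fused backends.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
with sdpa_kernel([SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION]):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```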
hrz6976
2c3dcd9774 Add a lock to server inference() 2025-02-13 10:05:22 +00:00
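Serializing `inference()` behind a single lock prevents concurrent requests from interleaving in a non-reentrant, GPU-bound generate loop. A sketch with `asyncio.Lock`, assuming an async server; `run_generate` is a stand-in for the real decode loop:

```python
import asyncio

_infer_lock = asyncio.Lock()

def run_generate(prompt: str) -> str:
    return prompt.upper()  # stand-in for the real blocking decode loop

async def inference(prompt: str) -> str:
    """Only one request may run the decode loop at a time."""
    async with _infer_lock:
        return await asyncio.to_thread(run_generate, prompt)
```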
liam
4385e85096 support force thinking 2025-02-12 12:43:53 +08:00
liam
6f3a39be08 update force_think config 2025-02-12 12:10:16 +08:00
liam
e536e1420d update force_think 2025-02-12 11:42:55 +08:00
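"Force thinking" for DeepSeek-R1-style models typically means completing the rendered prompt with an opening `<think>` tag so the model always emits a reasoning block. The config key `force_think` mirrors these commit messages; the template detail below is an assumption:

```python
def apply_force_think(rendered_prompt: str, force_think: bool) -> str:
    """Optionally open a reasoning block at the end of the prompt."""
    if force_think and not rendered_prompt.rstrip().endswith("<think>"):
        rendered_prompt += "<think>\n"
    return rendered_prompt
```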
UnicornChan
7527619f53 Merge pull request #122 from kvcache-ai/feat-DeepSeekV3 ([Feat] add support to DeepSeekV3) 2025-02-10 13:54:46 +08:00
liam
bf1d413be0 Merge branch 'feat-DeepSeekV3' of github.com:kvcache-ai/ktransformers into feat-DeepSeekV3 2025-02-08 13:17:10 +08:00
liam
c18ecd7b7f add flush print in local_chat output and change default optimize yaml of deepseekv3 to single gpu 2025-02-08 13:15:52 +08:00
RodriMora
b1bff2a405 Added a simple /models endpoint to support frontends, such as Open WebUI, that don't allow bypassing the model check 2025-02-07 10:30:39 +01:00
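Frontends like Open WebUI probe the OpenAI-compatible `/v1/models` route before letting a user connect, so even a hard-coded listing unblocks them. A minimal FastAPI sketch; the served model id is an illustrative placeholder:

```python
import time

from fastapi import FastAPI

app = FastAPI()

@app.get("/v1/models")
async def models():
    """Minimal OpenAI-compatible model listing."""
    return {
        "object": "list",
        "data": [{
            "id": "DeepSeek-V3",  # placeholder for the loaded model's name
            "object": "model",
            "created": int(time.time()),
            "owned_by": "ktransformers",
        }],
    }
```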