kvcache-ai/ktransformers · ktransformers/models
Latest commit: wang jiahao · 8456222852
Merge pull request #1276 from kvcache-ai/support_load_safetensor
support safetensor load, delete architectures argument
2025-05-12 11:10:26 +08:00
File                            Last commit                                             Date
__init__.py                     Initial commit                                          2024-07-27 16:06:58 +08:00
configuration_deepseek.py       Initial commit                                          2024-07-27 16:06:58 +08:00
configuration_deepseek_v3.py    add balance-serve, support concurrence                  2025-03-31 22:55:32 +08:00
configuration_llama.py          [feature] release 0.1.3                                 2024-08-28 16:11:43 +00:00
configuration_qwen2_moe.py      support qwen3, dont speak human language                2025-04-28 08:44:47 +00:00
configuration_qwen3_moe.py      support qwen3, dont speak human language                2025-04-28 08:44:47 +00:00
custom_cache.py                 support safetensor load, delete architectures argument  2025-05-09 10:38:29 +00:00
custom_modeling_deepseek_v2.py  fix-hopper-flashinfer                                   2025-04-29 11:06:34 +08:00
custom_modeling_deepseek_v3.py  fix-hopper-flashinfer                                   2025-04-29 11:06:34 +08:00
custom_modeling_qwen2_moe.py    support safetensor load, delete architectures argument  2025-05-09 10:38:29 +00:00
custom_modeling_qwen3_moe.py    support safetensor load, delete architectures argument  2025-05-09 10:38:29 +00:00
modeling_deepseek.py            optimize gguf dequant, save mem, support Q2_K           2025-02-22 06:13:01 +00:00
modeling_deepseek_v3.py         modeling_deepseek_v3: fix GenerationMixin warning       2025-05-01 07:48:15 +08:00
modeling_llama.py               [feature] release 0.1.3                                 2024-08-28 16:11:43 +00:00
modeling_mixtral.py             [ADD] support multi-gpu qlen>1 q5_k                     2024-08-12 11:41:26 +00:00
modeling_qwen2_moe.py           Initial commit                                          2024-07-27 16:06:58 +08:00
modeling_qwen3_moe.py           support qwen3, dont speak human language                2025-04-28 08:44:47 +00:00
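The most recent change to this directory (PR #1276, "support safetensor load, delete architectures argument") touches custom_cache.py and the custom_modeling_* files listed above. Purely for orientation, here is a minimal sketch of inspecting a .safetensors shard with the standalone safetensors library; it is not the loader added by that PR, and the shard filename is a made-up placeholder.

```python
# Minimal sketch (assumption): inspecting a .safetensors shard with the
# standalone `safetensors` library. Illustration only, not the loading code
# added in PR #1276; the shard filename below is hypothetical.
from safetensors import safe_open

shard = "model-00001-of-00009.safetensors"  # hypothetical shard name

with safe_open(shard, framework="pt", device="cpu") as f:  # "pt" requires torch
    names = list(f.keys())                  # tensor names stored in the shard
    print(f"{len(names)} tensors, e.g. {names[:3]}")
    t = f.get_tensor(names[0])              # lazily loads just this one tensor
    print(names[0], tuple(t.shape), t.dtype)
```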