<div align="center">
<!-- <h1>KTransformers</h1> -->
<p align="center">
<picture>
<img alt="KTransformers" src="https://github.com/user-attachments/assets/d5a2492f-a415-4456-af99-4ab102f13f8b" width=50%>
</picture>
</p>
</div>
<h2 id="intro">🎉 Introduction</h2>
KTransformers, pronounced as Quick Transformers, is designed to enhance your 🤗 <a href="https://github.com/huggingface/transformers">Transformers</a> experience with advanced kernel optimizations and placement/parallelism strategies.
<br/><br/>
KTransformers is a flexible, Python-centric framework designed with extensibility at its core.
By implementing and injecting an optimized module with a single line of code, users gain access to a Transformers-compatible
interface, RESTful APIs compliant with OpenAI and Ollama, and even a simplified ChatGPT-like web UI.
<br/><br/>
Our vision for KTransformers is to serve as a flexible platform for experimenting with innovative LLM inference optimizations. Please let us know if you need any other features.
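The injection idea above can be sketched with plain-Python stand-ins. Note that the class and function names here (`Linear`, `OptimizedLinear`, `Model`, `inject`) are illustrative assumptions, not KTransformers' actual API; the point is only the pattern of swapping in an interface-compatible optimized module:

```python
class Linear:
    """Baseline module: a plain dot product."""
    def __init__(self, weights):
        self.weights = weights

    def forward(self, xs):
        return sum(w * x for w, x in zip(self.weights, xs))


class OptimizedLinear(Linear):
    """Drop-in replacement: same interface, different kernel underneath."""
    def forward(self, xs):
        # A real backend (e.g. a fused GPU kernel) would go here; the only
        # contract is that inputs and outputs match the original module.
        return sum(w * x for w, x in zip(self.weights, xs))


class Model:
    def __init__(self):
        self.layer = Linear([1.0, 2.0, 3.0])

    def forward(self, xs):
        return self.layer.forward(xs)


def inject(model, attr, optimized_cls):
    """Swap a named submodule for an optimized, interface-compatible one."""
    old = getattr(model, attr)
    setattr(model, attr, optimized_cls(old.weights))


model = Model()
inject(model, "layer", OptimizedLinear)  # the "single line" injection step
print(model.forward([1.0, 1.0, 1.0]))    # → 6.0
```

Because the optimized class preserves the original module's interface, the rest of the model is unaware of the swap; that is what keeps the injected model Transformers-compatible.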
<h2 id="Updates">🔥 Updates</h2>
* **Feb 10, 2025**: Support DeepSeek-R1 and V3 on a single GPU (24GB VRAM) or multiple GPUs with 382GB of DRAM, achieving up to 3~28x speedup. The detailed tutorial is [here](./doc/en/DeepseekR1_V3_tutorial.md).
* **Aug 28, 2024**: Support a 1M-token context with the InternLM2.5-7B-Chat-1M model, using 24GB of VRAM and 150GB of DRAM. The detailed tutorial is [here](./doc/en/long_context_tutorial.md).
* **Aug 28, 2024**: Reduce DeepSeek-V2's required VRAM from 21GB to 11GB.
* **Aug 15, 2024**: Add a detailed [TUTORIAL](doc/en/injection_tutorial.md) for injection and multi-GPU usage.
* **Aug 14, 2024**: Support llamafile as a linear backend.
* **Aug 12, 2024**: Support multiple GPUs; support new models Mixtral 8\*7B and 8\*22B; support q2k, q3k, and q5k dequantization on GPU.
* **Aug 9, 2024**: Support native Windows.