From 35adc76337bd39d7690e57ff096dc9686aa0b7f2 Mon Sep 17 00:00:00 2001
From: "Li, Zonghang" <870644199@qq.com>
Date: Mon, 7 Apr 2025 22:08:14 +0800
Subject: [PATCH] Update README.md

---
 README.md | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 8909b249..c79e9643 100644
--- a/README.md
+++ b/README.md
@@ -9,19 +9,9 @@ Worried about OOM or your device stucking? Never again! prima.cpp keeps its **me
 
 How about speed? prima.cpp is built on [llama.cpp](https://github.com/ggerganov/llama.cpp), but it’s **15x faster!** 🚀 On my poor devices, QwQ-32B generates 11 tokens per second, and Llama 3-70B generates 1.5 tokens per second. That's about the same speed as audiobook apps, from slow to fast speaking. We plan to power a **Home Siri** soon, then we can have private chats without privacy concerns.
 
-
-
-Prima.cpp vs llama.cpp on QwQ 32B.
-
+https://github.com/Lizonghang/prima.cpp/raw/main/figures/qwq%2032b.mp4
-
-
-Prima.cpp vs llama.cpp on DeepSeek R1 70B
-
+https://github.com/Lizonghang/prima.cpp/raw/main/figures/qwq%2032b.mp4
 
 And, if your devices are more powerful, you could unlock even more possibilities, like running LLM agents right in your home! If you do, we’d love to hear about it, just share your cluster setup and token throughput with us!