mirror of
https://github.com/patw/AudioSumma.git
synced 2025-04-22 16:49:09 +00:00
Mention you need llama.cpp and whisper.cpp running in server mode
This commit is contained in:
parent
f8c7d09a59
commit
fe50d40fce
1 changed file with 5 additions and 1 deletion
@@ -2,7 +2,7 @@
Record your local audio and summarize it with whisper.cpp and llama.cpp! Open source, local on-prem transcription and summarization!
## AudioSumma Installation
## Installation
```
pip install -r requirements.txt
```
@@ -12,6 +12,10 @@ pip install -r requirements.txt
Copy sample.env to .env and point the endpoint URLs at a working llama.cpp and whisper.cpp instance running in server/API mode.
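As a sketch, a filled-in .env might look like the following. The variable names other than PROMPT_FORMAT are illustrative assumptions — copy sample.env and keep the actual key names it uses:

```
# Hypothetical .env values — only PROMPT_FORMAT is named in this README;
# the endpoint key names and URLs below are placeholders for your setup.
LLAMA_ENDPOINT=http://localhost:8080/completion
WHISPER_ENDPOINT=http://localhost:8081/inference
PROMPT_FORMAT=chatml
```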
## llama.cpp and whisper.cpp
These need to be running in server mode somewhere on your local machine or network. Make sure the PROMPT_FORMAT in your .env file exactly matches what the LLM model expects.
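Starting the two servers might look roughly like this. This is a hedged sketch: binary names and flags vary between llama.cpp/whisper.cpp versions, and the model paths and ports are placeholders — match the ports to whatever your .env points at:

```sh
# Placeholder model paths and ports — adjust to your builds and .env.
./llama-server -m ./models/your-model.gguf --host 0.0.0.0 --port 8080
./whisper-server -m ./models/ggml-base.en.bin --host 0.0.0.0 --port 8081
```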
## Running
Run either meetings.bat or meetings.sh to start the app.