Mention you need llama.cpp and whisper.cpp running in server mode

This commit is contained in:
Pat Wendorf 2024-06-21 12:56:41 -04:00
parent f8c7d09a59
commit fe50d40fce


@@ -2,7 +2,7 @@
Record your local audio and summarize it with whisper.cpp and llama.cpp! Open source, local on-prem transcription and summarization!
-## AudioSumma Installation
+## Installation
```
pip install -r requirements.txt
@@ -12,6 +12,10 @@ pip install -r requirements.txt
Copy sample.env to .env and point your endpoint URLs for a working llama.cpp and whisper.cpp running in server/api mode.
+## llama.cpp and whisper.cpp
+These need to be running in server mode somewhere on your local machine or on your network. Make sure the PROMPT_FORMAT in your .env file matches exactly what the LLM model expects.
## Running
Run either meetings.bat or meetings.sh to start the app.
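
For reference, starting the two backends might look like the sketch below. The model paths, host, and ports are assumptions for illustration; use whatever models and ports match the endpoint URLs in your .env.

```sh
# llama.cpp in server/API mode (model path and port are placeholders)
./llama-server -m models/your-model.gguf --host 0.0.0.0 --port 8080

# whisper.cpp server example (model path and port are placeholders)
./server -m models/ggml-base.en.bin --host 0.0.0.0 --port 8081
```

Both servers must stay running and be reachable from the machine where the app runs before launching meetings.bat or meetings.sh.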