
# AudioSumma

Record your local audio and summarize it with whisper.cpp and llama.cpp! Open source, local on-prem transcription and summarization!

![Main UI](screenshot.png)

## Installation

pip install -r requirements.txt
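
A minimal sketch of a typical install, assuming you want the dependencies isolated in a virtual environment (the `.venv` directory name is just a convention, not required by the project):

```sh
# Optional: create and activate a virtual environment first
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install the project's dependencies
pip install -r requirements.txt
```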

## Configuration

Copy sample.env to .env and set the endpoint URLs to point at working llama.cpp and whisper.cpp instances running in server/API mode.
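
As a rough illustration only, a filled-in .env could look like the sketch below. The variable names, paths, and ports here are assumptions for the example, so keep the names that sample.env actually uses:

```sh
# Hypothetical .env sketch -- copy sample.env and keep its real variable names.
# Endpoint of a whisper.cpp instance running in server mode (port is illustrative).
WHISPER_ENDPOINT=http://127.0.0.1:8080/inference
# Endpoint of a llama.cpp server or ollama instance (port is illustrative).
LLAMA_ENDPOINT=http://127.0.0.1:11434/v1
```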

### llama.cpp/ollama and whisper.cpp

These need to be running in server mode somewhere, either on your local machine or elsewhere on your network. Add their endpoints to your .env file.

The default values are correct if whisper.cpp and either the ollama server or the llama.cpp server are running on your local machine.
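
A minimal sketch of starting the servers, assuming recent default builds; binary names, model paths, and ports are illustrative and may differ on your setup:

```sh
# whisper.cpp in server mode (model path and port are illustrative)
./whisper-server -m models/ggml-base.en.bin --port 8080

# Either a llama.cpp server...
./llama-server -m models/your-model.gguf --port 8081

# ...or ollama in server mode (listens on port 11434 by default)
ollama serve
```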

## Running

Run meetings.bat (Windows) or meetings.sh (Linux/macOS) to start the app.
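
For example, from the repository root (on Linux/macOS the script may need execute permission first):

```sh
# Windows
meetings.bat

# Linux/macOS
chmod +x meetings.sh   # only needed once
./meetings.sh
```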

## Usage

- Record: start capturing your global audio.
- Stop: stop recording and save the wav file.
- Transcribe: transcribe all collected wav files and summarize them into a single date-stamped markdown document.
- Clean: remove old wav files and transcripts.