Pat Wendorf | b9533bf503 | Sample env was missing the fact prompt | 2025-02-20 19:18:15 -05:00
Pat Wendorf | 70a8816e8b | Forgot to update sample file. | 2025-02-08 08:54:58 -05:00
Pat Wendorf | c947e08cdc | Allow for adjustment of temp, top_p and max tokens for models that have issues with repeating. | 2025-02-08 08:54:29 -05:00
Pat Wendorf | 933b9e24d4 | Skip transcribe step if it's already done. Show each part being summarized. | 2025-02-03 16:30:52 -05:00
Pat Wendorf | 2d329d4806 | Updated instructions for new .env settings | 2024-11-29 09:08:24 -05:00
Pat Wendorf | 80637cc26c | Simplified the transcriber by using the openai library to access llama.cpp server or ollama server mode. | 2024-11-29 09:06:22 -05:00
Pat Wendorf | 77b7f440a0 | Replaced the screenshot to show the new recording device selector | 2024-08-19 08:33:29 -04:00
Pat Wendorf | 6128f02615 | Added drop down for selecting the recording device. | 2024-08-19 08:32:24 -04:00
Pat Wendorf | ac27eaf826 | Treat the summarizer as a module now, and call it with threading instead of subprocess | 2024-08-19 07:55:46 -04:00
Pat Wendorf | 3c05cd78dd | Used mistral large to re-write the summarize process. | 2024-08-19 07:41:32 -04:00
Pat Wendorf | 81e4e165bc | Added some silence trimming | 2024-07-30 11:56:40 -04:00
Pat Wendorf | f5af4960fc | Extract some fact based summaries now as well | 2024-06-24 09:52:38 -04:00
Pat Wendorf | cb00762378 | UI looks a bit better now | 2024-06-24 09:38:54 -04:00
Pat Wendorf | fe50d40fce | Mention you need llama.cpp and whisper.cpp running in server mode | 2024-06-21 12:56:41 -04:00
Pat Wendorf | f8c7d09a59 | Accidentally removed MD files | 2024-06-21 12:53:08 -04:00
Pat Wendorf | 69d269d2f0 | first commit | 2024-06-21 12:52:10 -04:00