Pat Wendorf | c947e08cdc | Allow for adjustment of temp, top_p and max tokens for models that have issues with repeating. | 2025-02-08 08:54:29 -05:00
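A minimal sketch of how such sampling controls can be exposed, assuming they are read from environment variables and passed to an OpenAI-compatible chat completion call; the variable names, defaults, model name, and base URL here are assumptions, not the project's actual configuration.

```python
import os
from openai import OpenAI

# Hypothetical configuration: lowering temperature / top_p and capping
# max_tokens are common mitigations for models that loop or repeat.
client = OpenAI(
    base_url=os.getenv("BASE_URL", "http://localhost:8080/v1"),  # assumed local server
    api_key="none",  # local servers ignore the key
)

response = client.chat.completions.create(
    model=os.getenv("MODEL", "local-model"),  # assumed model name
    messages=[{"role": "user", "content": "Summarize: ..."}],
    temperature=float(os.getenv("TEMPERATURE", "0.2")),
    top_p=float(os.getenv("TOP_P", "0.9")),
    max_tokens=int(os.getenv("MAX_TOKENS", "2048")),
)
print(response.choices[0].message.content)
```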
Pat Wendorf | 933b9e24d4 | Skip transcribe step if it's already done. Show each part being summarized. | 2025-02-03 16:30:52 -05:00
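A rough sketch of the skip-and-progress behaviour described above, assuming the transcript is cached to a text file next to the audio; `transcribe()` and `summarize_chunk()` are hypothetical stand-ins for the project's actual Whisper and LLM calls.

```python
import os

def transcribe_if_needed(audio_path: str, transcript_path: str) -> str:
    # Reuse an existing transcript instead of re-running transcription.
    if os.path.exists(transcript_path):
        print(f"Transcript already exists, skipping: {transcript_path}")
        with open(transcript_path, encoding="utf-8") as f:
            return f.read()
    text = transcribe(audio_path)  # hypothetical Whisper call
    with open(transcript_path, "w", encoding="utf-8") as f:
        f.write(text)
    return text

def summarize_parts(chunks):
    # Print progress for each part as it is summarized.
    summaries = []
    for i, chunk in enumerate(chunks, 1):
        print(f"Summarizing part {i}/{len(chunks)}...")
        summaries.append(summarize_chunk(chunk))  # hypothetical LLM call
    return summaries
```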
Pat Wendorf | 80637cc26c | Simplified the transcriber by using the openai library to access llama.cpp server or ollama server mode. | 2024-11-29 09:06:22 -05:00
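Both llama.cpp's server and Ollama expose OpenAI-compatible endpoints, so one openai-client code path can serve either backend by swapping the base URL. A short sketch; the ports are the servers' defaults and the model name is only an example, not necessarily what the project uses.

```python
from openai import OpenAI

# llama.cpp server (llama-server) default endpoint:
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# Ollama's OpenAI-compatible endpoint would instead be:
#   OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = client.chat.completions.create(
    model="local-model",  # example name; Ollama expects a pulled model tag
    messages=[{"role": "user", "content": "Say hello"}],
)
print(reply.choices[0].message.content)
```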
Pat Wendorf | 3c05cd78dd | Used mistral large to re-write the summarize process. | 2024-08-19 07:41:32 -04:00
Pat Wendorf | 81e4e165bc | Added some silence trimming | 2024-07-30 11:56:40 -04:00
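One common way to trim leading and trailing silence in Python is with pydub, as sketched below; the threshold and file names are illustrative assumptions, not the commit's actual values.

```python
from pydub import AudioSegment
from pydub.silence import detect_leading_silence

def trim_silence(in_path: str, out_path: str, threshold_db: float = -40.0) -> None:
    audio = AudioSegment.from_file(in_path)
    # Measure leading silence on the original and on the reversed audio
    # to find how much to cut from each end (values are in milliseconds).
    start = detect_leading_silence(audio, silence_threshold=threshold_db)
    end = detect_leading_silence(audio.reverse(), silence_threshold=threshold_db)
    audio[start:len(audio) - end].export(out_path, format="wav")

trim_silence("recording.mp3", "recording_trimmed.wav")
```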
Pat Wendorf | f5af4960fc | Extract some fact based summaries now as well | 2024-06-24 09:52:38 -04:00
Pat Wendorf | 69d269d2f0 | first commit | 2024-06-21 12:52:10 -04:00