Commit graph

  • b9533bf503 Sample env was missing the fact prompt main Pat Wendorf 2025-02-20 19:18:15 -05:00
  • 70a8816e8b Forgot to update sample file. Pat Wendorf 2025-02-08 08:54:58 -05:00
  • c947e08cdc Allow for adjustment of temp, top_p and max tokens for models that have issues with repeating. Pat Wendorf 2025-02-08 08:54:29 -05:00
  • 933b9e24d4 Skip transcribe step if it's already done. Show each part being summarized. Pat Wendorf 2025-02-03 16:30:52 -05:00
  • 2d329d4806 Updated instructions for new .env settings Pat Wendorf 2024-11-29 09:08:24 -05:00
  • 80637cc26c Simplified the transcriber by using the openai library to access llama.cpp server or ollama server mode. Pat Wendorf 2024-11-29 09:06:22 -05:00
  • 77b7f440a0 Replaced the screenshot to show the new recording device selector Pat Wendorf 2024-08-19 08:33:29 -04:00
  • 6128f02615 Added drop down for selecting the recording device. Pat Wendorf 2024-08-19 08:32:24 -04:00
  • ac27eaf826 Treat the summarizer as a module now, and call it with threading instead of subprocess Pat Wendorf 2024-08-19 07:55:46 -04:00
  • 3c05cd78dd Used mistral large to re-write the summarize process. Pat Wendorf 2024-08-19 07:41:32 -04:00
  • 81e4e165bc Added some silence trimming Pat Wendorf 2024-07-30 11:56:40 -04:00
  • f5af4960fc Extract some fact based summaries now as well Pat Wendorf 2024-06-24 09:52:38 -04:00
  • cb00762378 UI looks a bit better now Pat Wendorf 2024-06-24 09:38:54 -04:00
  • fe50d40fce Mention you need llama.cpp and whisper.cpp running in server mode Pat Wendorf 2024-06-21 12:56:41 -04:00
  • f8c7d09a59 Accidentally removed MD files Pat Wendorf 2024-06-21 12:53:08 -04:00
  • 69d269d2f0 first commit Pat Wendorf 2024-06-21 12:52:10 -04:00
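Commit 80637cc26c notes that the transcriber was simplified to talk to a llama.cpp or ollama server through its OpenAI-compatible API, and commit c947e08cdc adds adjustable temperature, top_p, and max tokens for models that repeat themselves. A minimal sketch of what such a call might look like is below; the endpoint URL, model name, default sampling values, and prompt text are illustrative assumptions, not the project's actual code.

```python
# Hedged sketch: calling a local llama.cpp or ollama server through its
# OpenAI-compatible /v1/chat/completions endpoint (cf. commit 80637cc26c).
# All concrete values (URL, model name, prompts, defaults) are assumptions.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # llama.cpp server default port (assumed)


def build_summary_request(transcript: str,
                          model: str = "local-model",
                          temperature: float = 0.2,
                          top_p: float = 0.9,
                          max_tokens: int = 1024) -> dict:
    """Build an OpenAI-style chat-completion payload.

    Lower temperature/top_p and a max_tokens cap are the kind of knobs
    commit c947e08cdc exposes for models that have issues with repeating.
    """
    return {
        "model": model,
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": "Summarize the meeting transcript."},
            {"role": "user", "content": transcript},
        ],
    }


def post_chat_completion(payload: dict) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


payload = build_summary_request("Alice: let's ship Friday. Bob: agreed.")
print(payload["max_tokens"])
```

The same request shape works against ollama's OpenAI-compatible endpoint by changing only `BASE_URL` and the model name, which is presumably why routing both backends through one client simplified the transcriber.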