Pat Wendorf | 9b8a6077bf | Switched bytes to tokens to be consistent with output. Allowed much larger outputs and inputs to support long context models. | 2025-07-02 12:50:24 -04:00
Pat Wendorf | 6e2fa21b62 | Removed temp and top_p, that should be controlled by the hosted model now. | 2025-05-13 08:38:31 -04:00
Pat Wendorf (aider) | aec8919bf6 | fix: join paths when deleting files in clean() | 2025-05-09 11:35:35 -04:00
Pat Wendorf (aider) | a2a3f900d2 | fix: join output dir path when opening transcript files | 2025-05-09 11:34:23 -04:00
Pat Wendorf (aider) | eca331e773 | fix: use full path when checking for existing transcripts | 2025-05-09 11:31:16 -04:00
Pat Wendorf (aider) | d9de639553 | fix: use full paths for wav file operations | 2025-05-09 11:26:45 -04:00
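The four `fix:` commits above (aec8919bf6 through d9de639553) share one pattern: bare filenames were being passed to file operations, so anything run outside the output directory broke. A minimal sketch of the fix, with illustrative names (`output_dir` and the `clean()` signature are assumptions, not the app's real API):

```python
import os

def clean(output_dir, filenames):
    """Delete files by full path instead of bare name (the clean() fix)."""
    removed = []
    for name in filenames:
        # The fix: join onto the output directory rather than using `name` alone,
        # which would resolve against the current working directory.
        path = os.path.join(output_dir, name)
        if os.path.exists(path):
            os.remove(path)
            removed.append(path)
    return removed
```

The same `os.path.join` treatment applies when opening transcripts, checking for existing transcripts, and operating on wav files.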
Pat Wendorf | 23d2e3715c | Line spacing in config dialog was too much | 2025-05-09 10:56:04 -04:00
Pat Wendorf (aider) | 60215cc0c4 | feat: add configurable output directory for transcripts | 2025-05-09 10:51:15 -04:00
Pat Wendorf | b90b651f14 | Forgot the v1 endpoint for llama.cpp/ollama | 2025-05-09 10:49:11 -04:00
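Commit b90b651f14 refers to the fact that llama.cpp's server and Ollama expose their OpenAI-compatible API under a `/v1` prefix, so a client base URL without that suffix 404s. A hedged sketch of the normalization (the helper name and URLs are illustrative, not the app's code):

```python
def v1_endpoint(base_url: str) -> str:
    """Append the /v1 suffix if the configured server URL lacks it."""
    base = base_url.rstrip("/")
    return base if base.endswith("/v1") else base + "/v1"

# Typical use with the openai library (assumed client setup):
#   from openai import OpenAI
#   client = OpenAI(base_url=v1_endpoint("http://localhost:11434"),
#                   api_key="unused")  # local servers ignore the key value
```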
Pat Wendorf (aider) | b754738146 | feat: replace input fields with text areas in config dialog | 2025-05-09 10:48:22 -04:00
Pat Wendorf (aider) | 73279657da | feat: increase settings dialog size and field heights | 2025-05-09 10:46:00 -04:00
Pat Wendorf | f1fb9f0a2b | Removed the old sample.env file and updated readme. | 2025-05-09 10:44:52 -04:00
Pat Wendorf | 80101d577c | More refined prompts | 2025-05-09 10:41:49 -04:00
Pat Wendorf | 150d910ddb | more better defaults | 2025-05-09 10:15:47 -04:00
Pat Wendorf | 40de47afe2 | More reasonable defaults from the working system | 2025-05-09 10:07:29 -04:00
Pat Wendorf (aider) | 08b2a2b627 | feat: add config dialog and move settings from .env to config.json | 2025-05-09 09:55:03 -04:00
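Commit 08b2a2b627 moves settings from a `.env` file to `config.json`. A common shape for that migration is merging the JSON file over built-in defaults; a minimal sketch, where the key names and default values are illustrative guesses rather than the app's actual schema:

```python
import json
import os

# Illustrative defaults -- not the project's real settings schema.
DEFAULTS = {
    "output_dir": "output",
    "llm_url": "http://localhost:11434/v1",
    "max_tokens": 4096,
}

def load_config(path="config.json"):
    """Merge config.json over built-in defaults; missing file means all defaults."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    return config
```

This keeps the app runnable with no config file at all, while the config dialog only has to write back the keys the user changed.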
Pat Wendorf (aider) | b9963599e3 | docs: add macOS homebrew requirements to README | 2025-05-09 09:47:16 -04:00
Pat Wendorf | 2fe4893296 | After merging summarize into meetings removed meetings.py | 2025-05-09 09:37:26 -04:00
Pat Wendorf (aider) | 04659ed489 | refactor: merge summarize.py functionality into meetings.py | 2025-05-09 09:30:36 -04:00
Pat Wendorf | b9533bf503 | Sample env was missing the fact prompt | 2025-02-20 19:18:15 -05:00
Pat Wendorf | 70a8816e8b | Forgot to update sample file. | 2025-02-08 08:54:58 -05:00
Pat Wendorf | c947e08cdc | Allow for adjustment of temp, top_p and max tokens for models that have issues with repeating. | 2025-02-08 08:54:29 -05:00
Pat Wendorf | 933b9e24d4 | Skip transcribe step if it's already done. Show each part being summarized. | 2025-02-03 16:30:52 -05:00
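The skip logic in 933b9e24d4 is a standard check-before-work guard: only run the expensive transcription when no transcript file exists yet. A hedged sketch, where the function signature and the injected `transcribe` callable are illustrative:

```python
import os

def transcribe_if_needed(wav_path, output_dir, transcribe):
    """Return the transcript path, transcribing only when it is missing."""
    base = os.path.splitext(os.path.basename(wav_path))[0]
    transcript = os.path.join(output_dir, base + ".txt")
    if os.path.exists(transcript):
        return transcript  # already done: skip the expensive step
    text = transcribe(wav_path)
    with open(transcript, "w") as f:
        f.write(text)
    return transcript
```

Note this interacts with the earlier path fixes: the existence check must use the full joined path, or already-transcribed files are re-processed.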
Pat Wendorf | 2d329d4806 | Updated instructions for new .env settings | 2024-11-29 09:08:24 -05:00
Pat Wendorf | 80637cc26c | Simplified the transcriber by using the openai library to access llama.cpp server or ollama server mode. | 2024-11-29 09:06:22 -05:00
Pat Wendorf | 77b7f440a0 | Replaced the screenshot to show the new recording device selector | 2024-08-19 08:33:29 -04:00
Pat Wendorf | 6128f02615 | Added drop down for selecting the recording device. | 2024-08-19 08:32:24 -04:00
Pat Wendorf | ac27eaf826 | Treat the summarizer as a module now, and call it with threading instead of subprocess | 2024-08-19 07:55:46 -04:00
Pat Wendorf | 3c05cd78dd | Used mistral large to re-write the summarize process. | 2024-08-19 07:41:32 -04:00
Pat Wendorf | 81e4e165bc | Added some silence trimming | 2024-07-30 11:56:40 -04:00
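The silence trimming added in 81e4e165bc is, in essence, dropping quiet samples from both ends of a recording while leaving internal pauses alone. The real code presumably operates on WAV audio (e.g. via an audio library or ffmpeg); this sketch shows just the idea on a plain list of sample amplitudes, with an illustrative threshold:

```python
def trim_silence(samples, threshold=500):
    """Strip below-threshold samples from both ends; keep internal pauses."""
    start = 0
    while start < len(samples) and abs(samples[start]) < threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]
```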
Pat Wendorf | f5af4960fc | Extract some fact based summaries now as well | 2024-06-24 09:52:38 -04:00
Pat Wendorf | cb00762378 | UI looks a bit better now | 2024-06-24 09:38:54 -04:00
Pat Wendorf | fe50d40fce | Mention you need llama.cpp and whisper.cpp running in server mode | 2024-06-21 12:56:41 -04:00
Pat Wendorf | f8c7d09a59 | Accidentally removed MD files | 2024-06-21 12:53:08 -04:00
Pat Wendorf | 69d269d2f0 | first commit | 2024-06-21 12:52:10 -04:00