Updated instructions for new .env settings

Pat Wendorf 2024-11-29 09:08:24 -05:00
parent 80637cc26c
commit 2d329d4806
2 changed files with 5 additions and 5 deletions


@@ -14,9 +14,11 @@ pip install -r requirements.txt
 Copy sample.env to .env and point your endpoint URLs for a working llama.cpp and whisper.cpp running in server/api mode.
-## llama.cpp and whisper.cpp
+## llama.cpp/ollama and whisper.cpp
-These need to be running in server mode somewhere on your local machine or on your network. Make sure the PROMPT_FORMAT in your .env file matches exactly to what the LLM model expects.
+These need to be running in server mode somewhere on your local machine or on your network. Add their endpoint URLs to your .env file.
+The default values are correct if you run the whisper.cpp server and either ollama or the llama.cpp server on your local machine.
 ## Running
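
For context, here is a minimal sketch (assumed code, not taken from this repository) of how a transcript could be fetched from the whisper.cpp server using the WHISPERCPP_URL value from .env. It assumes the requests and python-dotenv packages and the standard whisper.cpp server /inference multipart interface; the project's actual code may differ.

```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # read values from .env in the working directory
WHISPERCPP_URL = os.environ["WHISPERCPP_URL"]  # e.g. http://localhost:8088/inference


def transcribe(path: str) -> str:
    # whisper.cpp's server example accepts a multipart "file" upload and can
    # return JSON with a "text" field when response_format=json is requested.
    with open(path, "rb") as audio:
        resp = requests.post(
            WHISPERCPP_URL,
            files={"file": audio},
            data={"response_format": "json"},
            timeout=600,
        )
    resp.raise_for_status()
    return resp.json()["text"]
```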


@@ -1,9 +1,7 @@
 WHISPERCPP_URL="http://localhost:8088/inference"
-LLAMACPP_URL="http://localhost:8080/completion"
+LLAMACPP_URL="http://localhost:8080/v1"
 SYSTEM_MESSAGE="You are a friendly chatbot that summarizes call transcripts"
 SUMMARY_PROMPT="Call Transcript: {chunk}\n\nInstruction: Summarize the above call transcript but DO NOT MENTION THE TRANSCRIPT"
 SENTIMENT_PROMPT="Call Transcript: {chunk}\n\nInstruction: Summarize the sentiment for topics in the above call transcript but DO NOT MENTION THE TRANSCRIPT"
-PROMPT_FORMAT="<|im_start|>system\n{system}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
-STOP_TOKEN="<|im_end|>"
 CHUNK_SIZE=12288
 TEMPERATURE=0.1
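
Because LLAMACPP_URL now points at the OpenAI-compatible /v1 base path exposed by both llama.cpp server and ollama, the chat endpoint applies the model's own prompt template server-side, which is why PROMPT_FORMAT and STOP_TOKEN can be dropped from .env. Below is a minimal sketch of what a summarization call against that endpoint could look like, again assuming the requests and python-dotenv packages rather than the repository's actual code.

```python
import os

import requests
from dotenv import load_dotenv

load_dotenv()
LLAMACPP_URL = os.environ["LLAMACPP_URL"]        # e.g. http://localhost:8080/v1
SYSTEM_MESSAGE = os.environ["SYSTEM_MESSAGE"]
SUMMARY_PROMPT = os.environ["SUMMARY_PROMPT"]
TEMPERATURE = float(os.environ.get("TEMPERATURE", "0.1"))


def summarize(chunk: str) -> str:
    # The /v1/chat/completions route takes role-tagged messages, so no manual
    # prompt formatting or stop-token handling is needed on the client side.
    resp = requests.post(
        f"{LLAMACPP_URL}/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": SYSTEM_MESSAGE},
                {"role": "user", "content": SUMMARY_PROMPT.format(chunk=chunk)},
            ],
            "temperature": TEMPERATURE,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```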