diff --git a/README.md b/README.md
index d2af00f..a23bd37 100644
--- a/README.md
+++ b/README.md
@@ -36,6 +36,11 @@ Get Cited answers just like Perplexity.
 Works Flawlessly with Ollama local LLMs.
 #### 🏠 **Self Hostable**
 Open source and easy to deploy locally.
+#### 🎙️ Podcasts
+- Blazingly fast podcast generation agent.
+- Convert your chat conversations into engaging audio content
+- Support for multiple TTS providers (OpenAI, Azure, Google Vertex AI)
+
 #### 📊 **Advanced RAG Techniques**
 - Supports 150+ LLM's
 - Supports 6000+ Embedding Models.
@@ -58,12 +63,6 @@ Open source and easy to deploy locally.
 
 - Its main usecase is to save any webpages protected beyond authentication.
 
-### 2. Temporarily Deprecated
-
-#### Podcasts
-- The SurfSense Podcast feature is currently being reworked for better UI and stability. Expect it soon.
-
-
 ## FEATURE REQUESTS AND FUTURE
 
 
diff --git a/surfsense_web/content/docs/docker-installation.mdx b/surfsense_web/content/docs/docker-installation.mdx
index 2363665..47053c9 100644
--- a/surfsense_web/content/docs/docker-installation.mdx
+++ b/surfsense_web/content/docs/docker-installation.mdx
@@ -73,6 +73,7 @@ Before you begin, ensure you have:
 | LONG_CONTEXT_LLM | LiteLLM routed LLM for longer context windows (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
 | UNSTRUCTURED_API_KEY | API key for Unstructured.io service for document parsing |
 | FIRECRAWL_API_KEY | API key for Firecrawl service for web crawling |
+| TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |
 
 Include API keys for the LLM providers you're using. For example:
 - `OPENAI_API_KEY`: If using OpenAI models
diff --git a/surfsense_web/content/docs/manual-installation.mdx b/surfsense_web/content/docs/manual-installation.mdx
index 3813b1b..b1fed6a 100644
--- a/surfsense_web/content/docs/manual-installation.mdx
+++ b/surfsense_web/content/docs/manual-installation.mdx
@@ -61,6 +61,7 @@ Edit the `.env` file and set the following variables:
 | LONG_CONTEXT_LLM | LiteLLM routed long-context LLM (e.g., `gemini/gemini-2.0-flash`, `ollama/deepseek-r1:8b`) |
 | UNSTRUCTURED_API_KEY | API key for Unstructured.io service |
 | FIRECRAWL_API_KEY | API key for Firecrawl service (if using crawler) |
+| TTS_SERVICE | Text-to-Speech API provider for Podcasts (e.g., `openai/tts-1`, `azure/neural`, `vertex_ai/`). See [supported providers](https://docs.litellm.ai/docs/text_to_speech#supported-providers) |
 
 **Important**: Since LLM calls are routed through LiteLLM, include API keys for the LLM providers you're using:
 - For OpenAI models: `OPENAI_API_KEY`
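
For context, the new `TTS_SERVICE` variable documented above is a LiteLLM model string, so a backend can pass it straight to LiteLLM's `speech()` call and swap TTS providers by changing only the environment variable. The sketch below is illustrative only: the `synthesize_segment` helper and the `alloy` voice are assumptions, not SurfSense's actual podcast agent.

```python
import os
from pathlib import Path

from litellm import speech  # LiteLLM routes speech() to whichever provider the model string names


def synthesize_segment(text: str, out_path: Path) -> Path:
    """Hypothetical helper: render one podcast segment to an audio file.

    Assumes TTS_SERVICE holds a LiteLLM TTS model string (e.g. "openai/tts-1")
    and that the matching provider API key (e.g. OPENAI_API_KEY) is also set.
    The voice name is an illustrative default, not a SurfSense setting.
    """
    response = speech(
        model=os.environ["TTS_SERVICE"],
        voice="alloy",
        input=text,
    )
    response.stream_to_file(out_path)  # write the returned audio bytes to disk
    return out_path


# Illustrative usage:
# synthesize_segment("Welcome to today's episode.", Path("segment_01.mp3"))
```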