From b148457f7f845d0df430631f4bf01f85137ae320 Mon Sep 17 00:00:00 2001
From: Ikko Eltociear Ashimine
Date: Wed, 20 Nov 2024 22:47:03 +0900
Subject: [PATCH 1/6] docs: update README.md

proggrammer -> programmer
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fe96b79..93997ca 100644
--- a/README.md
+++ b/README.md
@@ -130,7 +130,7 @@ This project is licensed under the MIT License - see the [LICENSE] file for deta
 - DuckDuckGo for their search API
 
 ## Personal Note
-This tool represents an attempt to bridge the gap between simple LLM interactions and genuine research capabilities. By structuring the research process and maintaining documentation, it aims to provide more thorough and verifiable results than traditional LLM conversations. It also represents an attempt to improve on my previous project 'Web-LLM-Assistant-Llamacpp-Ollama' which simply gave LLM's the ability to search and scrape websites to answer questions. This new program, unlike it's predecessor I feel thos program takes that capability and uses it in a novel and actually very useful way, I feel that it is the most advanced and useful way I could conceive of building on my previous program, as a very new proggrammer this being my second ever program I feel very good about the result, I hope that it hits the mark!
+This tool represents an attempt to bridge the gap between simple LLM interactions and genuine research capabilities. By structuring the research process and maintaining documentation, it aims to provide more thorough and verifiable results than traditional LLM conversations. It also represents an attempt to improve on my previous project 'Web-LLM-Assistant-Llamacpp-Ollama', which simply gave LLMs the ability to search and scrape websites to answer questions. This new program, unlike its predecessor, takes that capability and uses it in a novel and genuinely useful way; I feel it is the most advanced and useful way I could conceive of building on my previous program. As a very new programmer, this being only my second program, I feel very good about the result, and I hope that it hits the mark!
 Given how much I have now been using it myself, unlike the previous program, which felt more like a novelty than an actual tool, this one is actually quite useful and unique, but I am quite biased!
 Please enjoy, and feel free to submit any suggestions for improvements, so that we can make this automated AI researcher even more capable.
 

From 9afa2ce945c667cf5d04d350669233c21273c5d0 Mon Sep 17 00:00:00 2001
From: Martin Mauch
Date: Wed, 20 Nov 2024 15:12:00 +0100
Subject: [PATCH 2/6] Use codeblocks in README

---
 README.md | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index fe96b79..8b94703 100644
--- a/README.md
+++ b/README.md
@@ -40,20 +40,23 @@ The key distinction is that this isn't just a chatbot - it's an automated resear
 
 1. Clone the repository:
 
+```sh
 git clone https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama
 cd Automated-AI-Web-Researcher-Ollama
-
+```
 2. Create and activate a virtual environment:
 
+```sh
 python -m venv venv
 source venv/bin/activate  # On Windows, use venv\Scripts\activate
-
+```
 3. Install dependencies:
 
+```sh
 pip install -r requirements.txt
-
+```
 4. Install and Configure Ollama:
 - Install Ollama following instructions at https://ollama.ai
@@ -62,16 +65,19 @@ pip install -r requirements.txt
 
 Create a file named `modelfile` with these exact contents:
 
+```
 FROM your-model-name
 PARAMETER num_ctx 38000
+```
 
 Replace "your-model-name" with your chosen model (e.g., phi3:3.8b-mini-128k-instruct).
 
 Then create the model:
 
+```sh
 ollama create research-phi3 -f modelfile
-
+```
 
 Note: This specific configuration is necessary, as recent Ollama versions have reduced context windows on models like phi3:3.8b-mini-128k-instruct despite the name suggesting a high context; the modelfile step restores the large context window needed for the amount of information used during the research process.
@@ -79,24 +85,26 @@ Note: This specific configuration is necessary as recent Ollama versions have re
 
 1. Start Ollama:
 
+```sh
 ollama serve
-
+```
 2. Run the researcher:
 
+```sh
 python Web-LLM.py
-
+```
 3. Start a research session:
-- Type @ followed by your research query
+- Type `@` followed by your research query
 - Press CTRL+D to submit
-- Example: "@What year is global population projected to start declining?"
+- Example: `@What year is global population projected to start declining?`
 
 4. During research you can use the following commands by typing the letter associated with each and submitting with CTRL+D:
-- Use 's' to show status.
-- Use 'f' to show current focus.
-- Use 'p' to pause and assess research progress, which will give you an assessment from the LLM after reviewing the entire research content whether it can answer your query or not with the content it has so far collected, then it waits for you to input one of two commands, 'c' to continue with the research or 'q' to terminate it which will result in a summary like if you terminated it without using the pause feature.
-- Use 'q' to quit research.
+- Use `s` to show status.
+- Use `f` to show current focus.
+- Use `p` to pause and assess research progress: the LLM reviews the entire research content collected so far and reports whether it can answer your query with that content, then waits for one of two commands, `c` to continue the research or `q` to terminate it, which produces a summary just as if you had quit without pausing.
+- Use `q` to quit research.
 
 5. After research completes:
 - Wait for the summary to be generated, and review the LLM's findings.

From c3c29a5da5d15dafbe1d4520f2a6527e60ffd3d3 Mon Sep 17 00:00:00 2001
From: James
Date: Thu, 21 Nov 2024 01:52:34 +1000
Subject: [PATCH 3/6] Update Web-LLM.py

---
 Web-LLM.py | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/Web-LLM.py b/Web-LLM.py
index dd3a445..616e38f 100644
--- a/Web-LLM.py
+++ b/Web-LLM.py
@@ -61,16 +61,13 @@ def print_header():
     print(Fore.YELLOW + """
     Welcome to the Advanced Research Assistant!
 
-    Commands:
-    - For web search: start message with '/'
-      Example: "/latest news on AI advancements"
-
-    - For research mode: start message with '@'
+    Usage:
+    - Start your research query with '@'
       Example: "@analyze the impact of AI on healthcare"
 
     Press CTRL+D (Linux/Mac) or CTRL+Z (Windows) to submit input.
     """ + Style.RESET_ALL)
-
+
 def get_multiline_input() -> str:
     """Get multiline input using raw terminal mode for reliable CTRL+D handling"""
     print(f"{Fore.GREEN}📝 Enter your message (Press CTRL+D to submit):{Style.RESET_ALL}")

From e3cb357c3b1ddd1d225e087e99dbf3fa3cf40e93 Mon Sep 17 00:00:00 2001
From: James
Date: Thu, 21 Nov 2024 13:36:25 +1000
Subject: [PATCH 4/6] Update llm_wrapper.py

---
 llm_wrapper.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/llm_wrapper.py b/llm_wrapper.py
index f8b97c0..5568952 100644
--- a/llm_wrapper.py
+++ b/llm_wrapper.py
@@ -44,7 +44,7 @@ class LLMWrapper:
                 'top_p': kwargs.get('top_p', self.llm_config.get('top_p', 0.9)),
                 'stop': kwargs.get('stop', self.llm_config.get('stop', [])),
                 'num_predict': kwargs.get('max_tokens', self.llm_config.get('max_tokens', 55000)),
-                'context_length': self.llm_config.get('n_ctx', 55000)
+                'num_ctx': self.llm_config.get('n_ctx', 55000)
             }
         }
         response = requests.post(url, json=data, stream=True)

From 5db6761f3e6ee32da525d238a7019d96c8ead9b8 Mon Sep 17 00:00:00 2001
From: Burke Johnson <185158560+synth-mania@users.noreply.github.com>
Date: Thu, 21 Nov 2024 11:43:18 -0600
Subject: [PATCH 5/6] add .gitignore

---
 .gitignore | 4 ++++
 1 file changed, 4 insertions(+)
 create mode 100644 .gitignore

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..faf3289
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,4 @@
+__pycache__
+logs
+modelfile
+research_session_*
\ No newline at end of file

From dbcee821a52fb610c6afed34a8165ce9527899f8 Mon Sep 17 00:00:00 2001
From: Burke Johnson <185158560+synth-mania@users.noreply.github.com>
Date: Thu, 21 Nov 2024 11:43:51 -0600
Subject: [PATCH 6/6] add venv to gitignore

---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitignore b/.gitignore
index faf3289..6cf101a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,5 @@
 __pycache__
+venv
 logs
 modelfile
 research_session_*
\ No newline at end of file
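
A note on PATCH 4/6: the one-key rename from `context_length` to `num_ctx` matters because Ollama's generate API only reads the context size from `options.num_ctx`; an unrecognized key is silently ignored, leaving the model at its small default window. The sketch below is illustrative only — `build_generate_payload` is a hypothetical helper, not code from this repository, and the payload shape is an assumption based on the fields visible in `llm_wrapper.py`:

```python
# Hypothetical sketch of the request body an LLMWrapper-style class might build
# for POST /api/generate on a local Ollama server (for illustration only).

def build_generate_payload(model: str, prompt: str, n_ctx: int = 55000) -> dict:
    """Build an Ollama generate-API request body with the context size set."""
    return {
        "model": model,
        "prompt": prompt,
        "options": {
            # 'num_ctx' is the option name Ollama recognizes for context size;
            # the old key 'context_length' was silently ignored (PATCH 4/6).
            "num_ctx": n_ctx,
        },
    }

payload = build_generate_payload("research-phi3", "@example research query")
```

Before the fix, the wrapper set `n_ctx` under the wrong key, so the 38000-token window configured in the modelfile step of the README was never requested at generation time.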