Mirror of https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama.git (synced 2025-01-19 00:47:46 +00:00, commit cac9c5f210).
The key distinction is that this isn't just a chatbot - it's an automated researcher.

## Installation

1. Clone the repository:

```sh
git clone https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama
cd Automated-AI-Web-Researcher-Ollama
```

2. Create and activate a virtual environment:

```sh
python -m venv venv
source venv/bin/activate  # On Windows, use venv\Scripts\activate
```

3. Install dependencies:

```sh
pip install -r requirements.txt
```

4. Install and Configure Ollama:

- Install Ollama following the instructions at https://ollama.ai

Create a file named `modelfile` with these exact contents:

```
FROM your-model-name
PARAMETER num_ctx 38000
```

Replace `your-model-name` with your chosen model (e.g., `phi3:3.8b-mini-128k-instruct`).

Then create the model:

```sh
ollama create research-phi3 -f modelfile
```

Note: This specific configuration is necessary because recent Ollama versions have reduced the context windows on models like phi3:3.8b-mini-128k-instruct, despite the name suggesting a high context length. The modelfile step restores a larger context window, which the research process needs to handle the large amount of information it gathers.
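The same `num_ctx` override can also be supplied per request through the `options` field of Ollama's `/api/generate` HTTP endpoint. A minimal sketch of building such a request payload - the model name `research-phi3` matches the `ollama create` step above; the prompt text is illustrative:

```python
import json


def build_generate_request(model: str, prompt: str, num_ctx: int = 38000) -> str:
    """Build a JSON payload for Ollama's /api/generate endpoint.

    The "options" field overrides model parameters per request; "num_ctx"
    mirrors the PARAMETER num_ctx line in the modelfile above.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single response instead of a token stream
        "options": {"num_ctx": num_ctx},
    }
    return json.dumps(payload)


# Example: a request body that could be POSTed to http://localhost:11434/api/generate
body = build_generate_request("research-phi3", "Summarize the findings so far.")
```

This is just one way to confirm the large context window is actually applied at inference time rather than relying solely on the modelfile default.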
## Usage

1. Start Ollama:

```sh
ollama serve
```

2. Run the researcher:

```sh
python Web-LLM.py
```

3. Start a research session:
- Type `@` followed by your research query
- Press CTRL+D to submit
- Example: `@What year is global population projected to start declining?`

4. During research you can use the following commands by typing the letter associated with each and submitting with CTRL+D:
- Use `s` to show status.
- Use `f` to show current focus.
- Use `p` to pause and assess research progress. The LLM reviews everything collected so far and reports whether it can already answer your query with that content, then waits for one of two commands: `c` to continue the research, or `q` to terminate it, which produces a summary just as if you had quit without pausing.
- Use `q` to quit research.
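The input conventions above (an `@`-prefixed query to start research, single letters to control it) amount to a small command grammar. A hypothetical sketch of a dispatcher for it - this illustrates the documented behavior, not the project's actual parsing code:

```python
def parse_input(raw: str):
    """Classify a submitted line per the documented grammar:
    '@...' starts a research session; 's', 'f', 'p', 'q' are controls.
    (Hypothetical sketch, not the repo's real implementation.)"""
    text = raw.strip()
    if text.startswith("@"):
        query = text[1:].strip()
        if not query:
            raise ValueError("empty research query after '@'")
        return ("research", query)
    if text.lower() in {"s", "f", "p", "q"}:
        return ("command", text.lower())
    raise ValueError(f"unrecognized input: {text!r}")


# parse_input("@What year is global population projected to start declining?")
#   -> ("research", "What year is global population projected to start declining?")
```

Keeping the research prefix distinct from single-letter controls is what lets the same CTRL+D submission channel serve both purposes.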
5. After research completes:
- Wait for the summary to be generated, and review the LLM's findings.