* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kotlin
* Create OllamaInlineCompletionModel and use it for building completion template
* Add support for blocking code completion on models that aren't known to support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty, needs-refresh, and loading states
* Avoid storing any model if the combo box is empty
* Fix icon size
* Revert to mutable settings; the immutable version had some bugs
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi model selection
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
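The code-completion commits above revolve around deriving a FIM (fill-in-the-middle) template and stop tokens from the selected model, and blocking completion for models not known to support infill. The sketch below illustrates that idea only; the class and method names are assumptions, not the plugin's actual implementation (the `<PRE>`/`<SUF>`/`<MID>`/`<EOT>` tokens are the publicly documented Code Llama infill format).

```java
// Hypothetical sketch: derive an infill prompt template and stop tokens
// from the model name, and block completion for unsupported models.
// Names here are illustrative assumptions, not the plugin's real API.
import java.util.List;

public class FimTemplateSketch {

  record InfillTemplate(String format, List<String> stopTokens) {
    // Fill the {prefix}/{suffix} placeholders with the code around the caret.
    String buildPrompt(String prefix, String suffix) {
      return format.replace("{prefix}", prefix).replace("{suffix}", suffix);
    }
  }

  // Code Llama's documented infill tokens, also used as stop sequences.
  static final InfillTemplate CODE_LLAMA = new InfillTemplate(
      "<PRE> {prefix} <SUF>{suffix} <MID>",
      List.of("<PRE>", "<SUF>", "<MID>", "<EOT>"));

  // Resolve the template from the model name; null means code completion
  // is blocked because the model is not known to support infill.
  static InfillTemplate forModel(String model) {
    if (model.startsWith("codellama")) {
      return CODE_LLAMA;
    }
    return null;
  }

  public static void main(String[] args) {
    InfillTemplate t = forModel("codellama:7b-code");
    System.out.println(t.buildPrompt("def add(a, b):\n    return ", "\n"));
    // A model with no known infill support resolves to null, so the
    // settings UI can disable code completion for it.
    System.out.println(forModel("llama2") == null);
  }
}
```

Tracking the template as a derived state, as one commit describes, would mean recomputing `forModel(...)` whenever the model field changes rather than storing the template itself.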
* Initial integration of llama.cpp for running LLaMA models locally
* Move submodule
* Copy llama submodule to bundle
* Add support for downloading models from the IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override host
* Fix override host handler
* Disable port field when override host enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch you.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, remove Code Llama Python model
* Code cleanup, enable llama port only on macOS
* Change downloaded gguf models path
* Move some of the labels to codegpt bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
* Add ability to configure a custom service
* Add example preset templates, rename module
* Custom service client impl
* Add You.com API integration
* Remove/ignore generated antlr classes
* Remove text completion models (deprecated)
* Remove unused code, fix settings state sync
* Display model name/icon in the tool window
* Update chat history UI
* Fix model/service sync
* Clear plugin state
* Fix minor bugs, add settings sync tests
* UI changes
* Separate model configuration
* Add support for overriding the completion path
* Update Find Bugs prompt
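The custom-service commits above (custom service client, example preset templates, overriding the completion path) suggest that requests are sent to a user-configurable host plus a user-configurable endpoint path. A minimal sketch of that URL assembly, with assumed names and example endpoints, might look like:

```java
// Hypothetical illustration of overriding the completion path for a
// custom OpenAI-compatible service: the request URL is assembled from a
// configurable base host and a configurable path. All names and default
// values here are assumptions for illustration.
import java.net.URI;

public class CustomServiceUrlSketch {

  // Join host and path, tolerating trailing/leading slashes in settings.
  static URI completionUri(String baseHost, String completionPath) {
    String host = baseHost.endsWith("/")
        ? baseHost.substring(0, baseHost.length() - 1)
        : baseHost;
    String path = completionPath.startsWith("/")
        ? completionPath
        : "/" + completionPath;
    return URI.create(host + path);
  }

  public static void main(String[] args) {
    // An OpenAI-style chat completions path.
    System.out.println(completionUri("http://localhost:8080", "/v1/chat/completions"));
    // An overridden path for a service with a different endpoint layout.
    System.out.println(completionUri("http://localhost:8080/", "api/generate"));
  }
}
```

Keeping the path separate from the host is what lets one preset template swap only the endpoint while reusing the rest of the request configuration.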