* feat: add OpenAI and Claude vision support
* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching
* feat: display error when image not found
* chore: bump llm-client
* feat: configurable file watcher and minor code cleanup
* fix: ensure image notifications are triggered only for image file types
* docs: update changelog
* fix: user textarea icon button behaviour
* refactor: minor cleanup
* Add first draft of inline code completion with mock text
* Add InsertInlineTextAction for inserting the autocomplete suggestion with Tab
- Disable suggestions when text is selected
- Add and remove the insert action based on whether the inlay hint is shown
* Request inline code completion
* Move inline completion prompt into txt file
* Add inline completion settings to ConfigurationState
* Fix code style
* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion
* Request code completions synchronously without SSE
* Use LlamaClient.getInfill() for inline code completion
* support inlay block element rendering, clean up code
* Use only the enclosing Method or Class contents for code completion when possible
* Refactor extracting PsiElement contents in code completion
* bump llm-client
* prevent completion call from triggering on EDT, make method params non-null by default
* refactor request building, decrease delay value
* Trigger code completion if cursor is not inside a word
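The "caret not inside a word" trigger condition above can be sketched as a simple character check around the caret offset; the class and method names here are illustrative assumptions, not the plugin's actual code:

```java
// Hypothetical sketch: completion should not trigger when the caret splits an
// identifier, approximated by checking the characters on both sides of the offset.
public class CaretWordCheck {

  // Returns true when both neighbors of the caret offset are word characters,
  // i.e. the caret is inside a word and completion should stay suppressed.
  public static boolean isInsideWord(String text, int offset) {
    if (offset <= 0 || offset >= text.length()) {
      return false;
    }
    return Character.isLetterOrDigit(text.charAt(offset - 1))
        && Character.isLetterOrDigit(text.charAt(offset));
  }

  public static void main(String[] args) {
    System.out.println(isInsideWord("foo bar", 2)); // caret inside "foo" -> true
    System.out.println(isInsideWord("foo bar", 3)); // caret at word boundary -> false
  }
}
```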
* Improve inlay rendering
* Support cancellable infill requests
* add statusbar widget, disable completions by default
* Show error notification if code completion failed
* Truly disable/enable EditorInlayHandler when completion is turned off/on
* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion
* Add progress indicator for code-completion with option to cancel
* Add CodeCompletionServiceTest + refactor inlay ElementRenderers
* several improvements
- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring
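The call-debouncing replacement for the timer implementation can be sketched roughly as follows: each new call cancels the previously scheduled task, so only the last call within the delay window actually fires. The class name and delay handling are illustrative assumptions, not the plugin's actual implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hedged sketch of call debouncing: rapid successive calls collapse into one
// execution after the delay window elapses.
public class Debouncer {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final long delayMillis;
  private ScheduledFuture<?> pending;

  public Debouncer(long delayMillis) {
    this.delayMillis = delayMillis;
  }

  public synchronized void debounce(Runnable task) {
    if (pending != null) {
      pending.cancel(false); // drop the previously queued call
    }
    pending = scheduler.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
  }

  public void shutdown() {
    scheduler.shutdown();
  }
}
```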
* trigger progress indicator only for llama completions
* fix tests
---------
Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Add support for extended llama.cpp parameters (top_k, top_p, min_p, and repeat_penalty)
Added top_k, top_p, min_p, and repeat_penalty fields to the llama.cpp request configuration. The default values for these fields match llama.cpp's own defaults, so if left untouched they do not affect the model's response to the request.
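A minimal sketch of these request fields, assuming the commonly documented llama.cpp defaults of that era (top_k 40, top_p 0.95, min_p 0.05, repeat_penalty 1.1); the field names follow the llama.cpp completion request schema, while the class itself is hypothetical:

```java
// Illustrative container for the extended llama.cpp sampling parameters.
// The defaults mirror llama.cpp's, so an untouched configuration leaves
// the model's sampling behavior unchanged.
public class LlamaSamplingParams {
  int topK = 40;              // top_k: keep only the k most likely tokens
  double topP = 0.95;         // top_p: nucleus sampling cutoff
  double minP = 0.05;         // min_p: minimum relative token probability
  double repeatPenalty = 1.1; // repeat_penalty: penalize repeated tokens

  // Serializes the fields into a JSON request-body fragment.
  public String toJson() {
    return String.format(
        "{\"top_k\": %d, \"top_p\": %s, \"min_p\": %s, \"repeat_penalty\": %s}",
        topK, topP, minP, repeatPenalty);
  }
}
```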
* Bump llm-client
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Initial implementation
* Refactor UI related classes and organize imports
* Display selected files notification, include the files in the prompt
* feat: store referenced file paths in the message state
* feat: add selected files accordion
* feat: update UI
* feat: improve file selection
* feat: support prompt template configuration
* fix: token calculation for virtualfile checkbox tree
* refactor: clean up
* refactor: move labels/descriptions to bundle
* feat: make the commit-message system prompt configurable
This removes the default prompt file and moves the prompt into the code, where it can be overwritten if the user chooses to modify it.
* fix: checkstyle
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Initial implementation of integrating llama.cpp to run LLaMA models locally
* Move submodule
* Copy llama submodule to bundle
* Support for downloading models from IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override host
* Fix override host handler
* Disable port field when override host enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch you.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, removed code llama python model
* Code cleanup, enable llama port only on macOS
* Change downloaded gguf models path
* Move some of the labels to codegpt bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
* Ability to configure custom service
* Add example preset templates, rename module
* Custom service client impl
* Add YOU API integration
* Remove/ignore generated antlr classes
* Remove text completion models (deprecated)
* Remove unused code, fix settings state sync
* Display model name/icon in the tool window
* Update chat history UI
* Fix model/service sync
* Clear plugin state
* Fix minor bugs, add settings sync tests
* UI changes
* Separate model configuration
* Add support for overriding the completion path
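Overriding the completion path can be sketched as building the request URL from the configured base host plus either a default path or the user-supplied override; the default path and class name here are illustrative assumptions, not the plugin's actual configuration keys:

```java
// Hypothetical URL builder for the configurable completion path.
public class CompletionUrlBuilder {
  static final String DEFAULT_PATH = "/v1/completions"; // assumed default

  // Joins the base host and path, falling back to the default when no override is set.
  public static String build(String baseHost, String overridePath) {
    String path = (overridePath == null || overridePath.isBlank())
        ? DEFAULT_PATH
        : overridePath;
    String host = baseHost.endsWith("/")
        ? baseHost.substring(0, baseHost.length() - 1)
        : baseHost;
    return host + (path.startsWith("/") ? path : "/" + path);
  }
}
```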
* Update Find Bugs prompt