* Add code completion setting states for custom service
* Add settings for code completion in Custom OpenAI service
* Move code completion section to the bottom
* Create test testFetchCodeCompletionCustomService
* Add Custom OpenAI to the "Enable/Disable Completion" actions
* New configuration UI separating /v1/chat/completions from /v1/completions
* Code completion for Custom Service
* Formatting fixes
* Move prefix and suffix to templates in body
* Message updates
* New tabbed UI for Chat and Code Completions
* Convert to Kotlin, improve UI, and other minor changes
* Fix test connection for chat completions
* Add help tooltips
* Allow backward compatibility
* Support prefix and suffix placeholders (see the sketch below)
* Fix initial state loading
---------
Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
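A minimal sketch of how the prefix/suffix placeholders above could be resolved against a user-configured /v1/completions body template; the function name, template keys, and FIM tokens here are illustrative assumptions, not the plugin's actual API:

```kotlin
// Sketch: substitute {prefix}/{suffix} placeholders into a user-configured
// /v1/completions body template. buildCompletionBody and the template keys
// below are illustrative assumptions, not the plugin's actual API.
fun buildCompletionBody(
    template: Map<String, Any>,
    prefix: String,
    suffix: String
): Map<String, Any> = template.mapValues { (_, value) ->
    when (value) {
        is String -> value.replace("{prefix}", prefix).replace("{suffix}", suffix)
        else -> value
    }
}

fun main() {
    // A body template a user might configure for a FIM-capable model:
    val template = mapOf(
        "model" to "my-fim-model",
        "prompt" to "<PRE>{prefix}<SUF>{suffix}<MID>",
        "max_tokens" to 128
    )
    println(buildCompletionBody(template, prefix = "fun add(a: Int, b: Int) =", suffix = "\n"))
}
```

The `<PRE>/<SUF>/<MID>` tokens are just one example format; the point of the placeholders is that any fill-in-the-middle prompt shape can be expressed in the template itself.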
* Add first draft of inline code completion with mock text
* Add InsertInlineTextAction for inserting the autocomplete suggestion with Tab
- Disable suggestions when text is selected
- Add and remove the insert action based on whether the inlay hint is shown
* Request inline code completion
* Move inline completion prompt into txt file
* Add inline completion settings to ConfigurationState
* Fix code style
* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion
* Request code completions synchronously without SSE
* Use LlamaClient.getInfill() for inline code completion
* Support inlay block element rendering, clean up code
* Use only enclosed Method or Class contents for code completion if possible
* Refactor extracting PsiElement contents in code completion
* Bump llm-client
* Prevent the completion call from triggering on the EDT, force method params to be non-null by default
* Refactor request building, decrease the delay value
* Trigger code completion if cursor is not inside a word
* Improve inlay rendering
* Support cancellable infill requests
* Add status bar widget, disable completions by default
* Show an error notification if code completion fails
* Truly disable/enable EditorInlayHandler when completion is turned off/on
* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion
* Add progress indicator for code-completion with option to cancel
* Add CodeCompletionServiceTest + refactor inlay ElementRenderers
* Several improvements:
- Replace the timer implementation with call debouncing (see the sketch below)
- Use the OpenAI /v1/completions API for completions
- Code refactoring
* Trigger the progress indicator only for llama completions
* Fix tests
---------
Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
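The call debouncing mentioned in the list above could look roughly like this; a minimal sketch with an assumed class name and delay value, using a plain scheduler so the request fires off the EDT only after typing pauses:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.ScheduledFuture
import java.util.concurrent.TimeUnit

// Sketch: fire a completion request only after the user stops typing.
// The class name and the 250 ms delay are assumptions, not the plugin's
// actual values; each new call cancels the previously queued one.
class CallDebouncer(private val delayMs: Long = 250) {
    private val scheduler = Executors.newSingleThreadScheduledExecutor()
    private var pending: ScheduledFuture<*>? = null

    @Synchronized
    fun debounce(task: () -> Unit) {
        pending?.cancel(true) // drop the still-waiting previous call
        pending = scheduler.schedule(Runnable { task() }, delayMs, TimeUnit.MILLISECONDS)
    }

    fun shutdown() = scheduler.shutdownNow()
}

fun main() {
    val debouncer = CallDebouncer()
    repeat(5) { debouncer.debounce { println("completion request fired") } } // only the last fires
    Thread.sleep(500)
    debouncer.shutdown()
}
```

Cancelling the pending future also pairs naturally with the cancellable infill requests noted above: an abandoned trigger never reaches the backend.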
* Initial implementation of integrating llama.cpp to run LLaMA models locally (see the sketch below)
* Move submodule
* Copy llama submodule to bundle
* Support downloading models from the IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override the host
* Fix the host override handler
* Disable the port field when host override is enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch You.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, removed code llama python model
* Code cleanup, enable the llama port only on macOS
* Change downloaded gguf models path
* Move some of the labels to the CodeGPT bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
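For the llama.cpp integration above, a raw infill call against the locally running server might look like the following; this assumes the stock llama.cpp server's /infill endpoint, field names, and default port, not the plugin's actual LlamaClient wrapper:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch: call the /infill endpoint of a locally running llama.cpp server.
// Endpoint, field names, and port follow the stock llama.cpp server API;
// real code would JSON-escape prefix/suffix instead of interpolating them.
fun requestInfill(prefix: String, suffix: String, baseUrl: String = "http://localhost:8080"): String {
    val body = """{"input_prefix": "$prefix", "input_suffix": "$suffix", "n_predict": 64}"""
    val request = HttpRequest.newBuilder()
        .uri(URI.create("$baseUrl/infill"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}

fun main() {
    // Assumes a server was started locally first, e.g. with a downloaded
    // quantized model: ./server -m codellama-7b.Q4_K_M.gguf --port 8080
    println(requestInfill(prefix = "fun fib(n: Int): Int {", suffix = "}"))
}
```

The `baseUrl` parameter roughly mirrors the host-override option from the list above: when a host is set, requests go to the user's own server instead of the bundled one.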