* chore(deps): Update and centralize dependencies
* Update treesitter to 0.22.2
* Update kotlin to 1.9.23
* Update jackson to 2.17.0
* Update gradle-intellij-plugin to 1.17.3
* Update gradle to 8.7
* Use BOMs where possible
* Centralize dependencies in version catalog
* Allow Dependabot to update other modules (add treesitter and buildSrc/src/main/kotlin, remove core)
* fix: preload credentials only once for all headers
* feat: add OpenAI and Claude vision support
* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching
* feat: display error when image not found
* chore: bump llm-client
* feat: configurable file watcher and minor code cleanup
* fix: ensure image notifications are triggered only for image file types
* docs: update changelog
* fix: user textarea icon button behaviour
* refactor: minor cleanup
* Implement support for You Pro modes: Default, Agent, Research, and Custom with various third-party models
* Update list of You modes/models depending on user having subscription
* add default value for chatMode
* Add first draft of inline code completion with mock text
* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab
- Disable suggestions when text is selected
- Add and remove the insert action based on when the inlay hint is shown
* Request inline code completion
* Move inline completion prompt into txt file
* Add inline completion settings to ConfigurationState
* Fix code style
* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion
* Code completion requests synchronously without SSE
* Use LlamaClient.getInfill() for inline code completion
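An infill-style request feeds the model the text before and after the caret. The class and method names below are illustrative sketches of that idea, not the plugin's actual `LlamaClient` API:

```java
// Sketch of assembling an infill (fill-in-the-middle) request by splitting
// the document at the caret; names here are hypothetical, not the plugin's API.
public final class InfillRequest {
    private final String prefix;
    private final String suffix;

    public InfillRequest(String prefix, String suffix) {
        this.prefix = prefix;
        this.suffix = suffix;
    }

    /** Splits the document text at the caret offset into prefix/suffix parts. */
    public static InfillRequest atCaret(String documentText, int caretOffset) {
        return new InfillRequest(
            documentText.substring(0, caretOffset),
            documentText.substring(caretOffset));
    }

    public String getPrefix() { return prefix; }

    public String getSuffix() { return suffix; }
}
```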
* support inlay block element rendering, clean up code
* Use only enclosed Method or Class contents for code completion if possible
* Refactor extracting PsiElement contents in code completion
* bump llm-client
* prevent completion call from triggering on EDT, force method params to be non-null by default
* refactor request building, decrease delay value
* Trigger code completion if cursor is not inside a word
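The "not inside a word" guard can be expressed as a small predicate on the character at the caret. This is a hypothetical sketch of such a check, not the plugin's implementation:

```java
// Illustrative guard: only trigger completion when the caret is not in the
// middle of an identifier (the character at the caret is absent or a non-word
// character).
public final class CompletionTrigger {

    private CompletionTrigger() {
    }

    public static boolean shouldTrigger(String text, int caretOffset) {
        if (caretOffset >= text.length()) {
            return true; // caret at end of document
        }
        char next = text.charAt(caretOffset);
        return !Character.isLetterOrDigit(next) && next != '_';
    }
}
```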
* Improve inlay rendering
* Support cancellable infill requests
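Cancellable requests typically mean aborting any in-flight call before starting a new one. A minimal sketch with `java.util.concurrent` (assumed structure, not the plugin's code):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: each new completion request cancels the previous in-flight one,
// so only the most recent infill call can produce a result.
public final class CancellableCompletion {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private Future<String> inFlight;

    public synchronized Future<String> request(Callable<String> call) {
        if (inFlight != null) {
            inFlight.cancel(true); // interrupt the previous request
        }
        inFlight = executor.submit(call);
        return inFlight;
    }

    public void shutdown() {
        executor.shutdownNow();
    }
}
```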
* add statusbar widget, disable completions by default
* Show error notification if code completion failed
* Truly disable/enable EditorInlayHandler when completion is turned off/on
* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion
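The listener Topic follows IntelliJ's message-bus publish/subscribe pattern. Below is a plain-Java sketch of that pattern without platform dependencies; only the listener name mirrors the changelog, the rest is illustrative:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Plain-Java sketch of the publish/subscribe pattern behind an IntelliJ
// message-bus Topic; not the plugin's actual implementation.
public final class CodeCompletionEnabledBus {

    public interface CodeCompletionEnabledListener {
        void onCodeCompletionsEnabledChange(boolean enabled);
    }

    private final List<CodeCompletionEnabledListener> subscribers =
        new CopyOnWriteArrayList<>();

    public void subscribe(CodeCompletionEnabledListener listener) {
        subscribers.add(listener);
    }

    /** Broadcasts the new enabled/disabled state to every subscriber. */
    public void publish(boolean enabled) {
        subscribers.forEach(l -> l.onCodeCompletionsEnabledChange(enabled));
    }
}
```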
* Add progress indicator for code-completion with option to cancel
* Add CodeCompletionServiceTest + refactor inlay ElementRenderers
* several improvements
- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring
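Replacing a timer with call debouncing means only the last keystroke within the delay window fires a completion request. A minimal sketch with `ScheduledExecutorService` (an assumed approach, not the plugin's code):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal call debouncer: each new call cancels the previously scheduled one,
// so only the last call inside the delay window actually runs.
public final class Debouncer {

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    public synchronized void call(Runnable task, long delayMillis) {
        if (pending != null) {
            pending.cancel(false); // drop the previously scheduled call
        }
        pending = scheduler.schedule(task, delayMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```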
* trigger progress indicator only for llama completions
* fix tests
---------
Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)
Added top_k, top_p, min_p, and repeat_penalty fields to the llama.cpp request configuration. The default values for these fields match llama.cpp's defaults, so when left untouched they do not affect the model's response to the request.
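The values below reflect llama.cpp's commonly documented server defaults at the time; verify them against the llama.cpp version you actually run. The class is an illustrative sketch, not the plugin's request model:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sampling-parameter defaults for a llama.cpp completion request;
// values mirror llama.cpp's documented defaults but should be verified against
// the version in use.
public final class LlamaSamplingParams {

    private LlamaSamplingParams() {
    }

    public static Map<String, Object> defaults() {
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("top_k", 40);           // keep only the 40 most likely tokens
        params.put("top_p", 0.95);         // nucleus sampling threshold
        params.put("min_p", 0.05);         // minimum relative token probability
        params.put("repeat_penalty", 1.1); // penalize recently repeated tokens
        return params;
    }
}
```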
* Bump llm-client
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Initial implementation of integrating llama.cpp to run LLaMA models locally
* Move submodule
* Copy llama submodule to bundle
* Support for downloading models from IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override host
* Fix override host handler
* Disable port field when override host enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch you.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, removed code llama python model
* Code cleanup, enable llama port only on macOS
* Change downloaded gguf models path
* Move some of the labels to codegpt bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
* Free GPT4 for a month to try
* Better tooltip
* Replace toggle component with checkbox and other minor ui improvements
* Add UTM and userId params to You.com completion request
* Fix #145 - web search results not being displayed despite the flag
---------
Co-authored-by: siilats <keith@siilats.com>
* Ability to configure custom service
* Add example preset templates, rename module
* Custom service client impl
* Add YOU API integration
* Remove/ignore generated antlr classes
* Remove text completion models (deprecated)
* Remove unused code, fix settings state sync
* Display model name/icon in the tool window
* Update chat history UI
* Fix model/service sync
* Clear plugin state
* Fix minor bugs, add settings sync tests
* UI changes
* Separate model configuration
* Add support for overriding the completion path
* Update Find Bugs prompt