* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kt
* Create OllamaInlineCompletionModel and use it for building completion template
* Add support for blocking code completion on models not known to support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty/needs refresh/loading
* Avoid storing any model if the combo box is empty
* Fix icon size
* Back to mutable settings
There were some bugs with immutable settings
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi model selection
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
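The FIM-template commits above track the fill-in-the-middle template as state derived from the selected model, which is also what lets unsupported models block code completion. A minimal sketch of that idea — the names (`FimTemplate`, `deriveTemplate`) and the model-name matching are illustrative assumptions, not the plugin's actual code:

```java
import java.util.Locale;

// Illustrative FIM templates keyed by model family; not the plugin's real enum.
enum FimTemplate {
    CODE_LLAMA("<PRE> %s <SUF>%s <MID>"),
    STABLE_CODE("<fim_prefix>%s<fim_suffix>%s<fim_middle>"),
    NONE("");

    final String format;
    FimTemplate(String format) { this.format = format; }

    String buildPrompt(String prefix, String suffix) {
        return String.format(format, prefix, suffix);
    }
}

class FimTemplateResolver {
    // Derive the template from the model name instead of persisting it separately,
    // so the stored settings can never drift out of sync with the selected model.
    static FimTemplate deriveTemplate(String modelName) {
        String name = modelName.toLowerCase(Locale.ROOT);
        if (name.contains("codellama")) return FimTemplate.CODE_LLAMA;
        if (name.contains("stable-code")) return FimTemplate.STABLE_CODE;
        return FimTemplate.NONE; // unknown model: block code completion
    }

    static boolean supportsCodeCompletion(String modelName) {
        return deriveTemplate(modelName) != FimTemplate.NONE;
    }
}
```

With this shape, the "disable code completion settings when an unsupported model is entered" behaviour reduces to a `supportsCodeCompletion` check on the combo-box value.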
* Add code completion setting states for custom service
* Add settings for code completion in Custom OpenAI service
* Move code completion section to the bottom
* Create test testFetchCodeCompletionCustomService
* Add Custom OpenAI to the "Enable/Disable Completion" actions
* New configuration UI separating /v1/chat/completions from /v1/completions
* Code completion for Custom Service
* Formatting fixes
* Move prefix and suffix to templates in body
* Message updates
* New tabbed UI for Chat and Code Completions
* convert to kotlin, improve ui and other minor changes
* fix test connection for chat completions
* add help tooltips
* allow backward compatibility
* support prefix and suffix placeholders
* fix initial state loading
---------
Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
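The prefix/suffix commits above move those values into placeholders inside the user-configured request body template. A rough sketch of how such substitution might work — the `{prefix}`/`{suffix}` placeholder names and the escaping logic are assumptions for illustration:

```java
class BodyTemplateRenderer {
    // Escape characters that would break a JSON string literal.
    static String jsonEscape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\t': sb.append("\\t");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    // Substitute the code before/after the caret into the body template,
    // escaping so the result stays valid JSON.
    static String render(String template, String prefix, String suffix) {
        return template
            .replace("{prefix}", jsonEscape(prefix))
            .replace("{suffix}", jsonEscape(suffix));
    }
}
```

This keeps the request body fully user-configurable, which is what allows the same UI to target either `/v1/chat/completions` or `/v1/completions`.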
* fixes #432: adds support for placeholders in prompts
- activate gradle plugin Git4Idea
- adds PlaceholderUtil
- adds DATE_ISO_8601 PlaceholderReplacer
- adds BRANCH_NAME PlaceholderReplacer
* convert to Kotlin, improve UI, and add integration test
* fix: do not reuse projects from previous test runs
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
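The placeholder commits above (PlaceholderUtil plus the DATE_ISO_8601 and BRANCH_NAME replacers) suggest a small replacer abstraction. A hypothetical sketch — the interface shape and the `$DATE_ISO_8601` token are assumptions, and the Git4Idea-backed BRANCH_NAME replacer is omitted since it needs IDE services:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;

// Each replacer knows one placeholder token and how to compute its value.
interface PlaceholderReplacer {
    String placeholder();   // e.g. "$DATE_ISO_8601"
    String value();
}

class DateIso8601Replacer implements PlaceholderReplacer {
    public String placeholder() { return "$DATE_ISO_8601"; }
    public String value() {
        return LocalDate.now().format(DateTimeFormatter.ISO_LOCAL_DATE);
    }
}

class PlaceholderUtil {
    // Run every registered replacer over the prompt text.
    static String replaceAll(String prompt, List<PlaceholderReplacer> replacers) {
        for (PlaceholderReplacer r : replacers) {
            prompt = prompt.replace(r.placeholder(), r.value());
        }
        return prompt;
    }
}
```

New placeholders then only require registering another `PlaceholderReplacer` implementation.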
* chore(deps): Update and centralize dependencies
* Update treesitter to 0.22.2
* Update kotlin to 1.9.23
* Update jackson to 2.17.0
* Update gradle-intellij-plugin to 1.17.3
* Update gradle to 8.7
* Use BOMs where possible
* Centralize dependencies in version catalog
* Allow Dependabot to update other modules (add treesitter and buildSrc/src/main/kotlin, remove core)
* fix: preload credentials only once for all headers
* feat: add OpenAI and Claude vision support
* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching
* feat: display error when image not found
* chore: bump llm-client
* feat: configurable file watcher and minor code cleanup
* fix: ensure image notifications are triggered only for image file types
* docs: update changelog
* fix: user textarea icon button behaviour
* refactor: minor cleanup
* Implement support for You Pro modes: Default, Agent, Custom (with various third-party models), and Research
* Update list of You modes/models depending on user having subscription
* add default value for chatMode
* Add setting to use existing Llama server
* minor UI improvements
* support infill template configuration
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
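The "use existing Llama server" and host-override commits imply a base-URL resolution step before any request is sent. A minimal sketch under assumed setting names:

```java
class LlamaServerConfig {
    // When the user points the plugin at an already-running server, use that
    // host; otherwise fall back to the bundled server on a local port.
    static String resolveBaseUrl(boolean useExistingServer, String overrideHost, int localPort) {
        if (useExistingServer && overrideHost != null && !overrideHost.isBlank()) {
            return overrideHost;
        }
        return "http://localhost:" + localPort;
    }
}
```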
* Add first draft of inline code completion with mock text
* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab
- Disable suggestions when text is selected
- Add or remove the insert action based on whether the inlay hint is shown
* Request inline code completion
* Move inline completion prompt into txt file
* Add inline completion settings to ConfigurationState
* Fix code style
* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion
* Make code completion requests synchronous, without SSE
* Use LlamaClient.getInfill() for inline code completion
* support inlay block element rendering, clean up code
* Use only enclosed Method or Class contents for code completion if possible
* Refactor extracting PsiElement contents in code completion
* bump llm-client
* fix completion call triggering on EDT, force method params to be non-null by default
* refactor request building, decrease delay value
* Trigger code completion if cursor is not inside a word
* Improve inlay rendering
* Support cancellable infill requests
* add statusbar widget, disable completions by default
* Show error notification if code completion failed
* Truly disable/enable EditorInlayHandler when completion is turned off/on
* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion
* Add progress indicator for code-completion with option to cancel
* Add CodeCompletionServiceTest + refactor inlay ElementRenderers
* several improvements
- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring
* trigger progress indicator only for llama completions
* fix tests
---------
Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
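The "replace timer implementation with call debouncing" item above can be sketched with a scheduled executor: each keystroke cancels the pending request, so the completion call fires only after the user pauses typing. This is an illustrative sketch, not the plugin's implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal debouncer: a new call() cancels the previously scheduled task.
class Debouncer {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    synchronized void call(Runnable task, long delayMs) {
        if (pending != null) {
            pending.cancel(false); // drop the stale completion request
        }
        pending = scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS);
    }

    void shutdown() { scheduler.shutdown(); }
}
```

Compared to a bare timer, this also gives a natural hook for making the in-flight infill request cancellable.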
* Add support for some extended llama.cpp parameters (top_k, top_p, min_p, and repeat_penalty)
Added `top_k`, `top_p`, `min_p`, and `repeat_penalty` fields to the llama.cpp request configuration. The default values for these fields match the llama.cpp defaults, so if left untouched they do not affect the model's response.
* Bump llm-client
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
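A sketch of a llama.cpp `/completion` request body carrying the extended sampling fields from the commit above. The defaults here mirror common llama.cpp defaults (top_k=40, top_p=0.95, min_p=0.05, repeat_penalty=1.1) at the time of writing, but should be verified against the llama.cpp version in use:

```java
// Illustrative request builder; the real plugin serializes via llm-client.
class LlamaCompletionRequest {
    int topK = 40;
    double topP = 0.95;
    double minP = 0.05;
    double repeatPenalty = 1.1;

    // Build the JSON body with llama.cpp's snake_case field names.
    String toJson(String prompt) {
        return String.format(java.util.Locale.ROOT,
            "{\"prompt\": \"%s\", \"top_k\": %d, \"top_p\": %.2f, "
            + "\"min_p\": %.2f, \"repeat_penalty\": %.2f}",
            prompt, topK, topP, minP, repeatPenalty);
    }
}
```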
* adds: configuration for the commit-message system prompt
This removes the default prompt file and moves its contents into the code, so the prompt can be overwritten if the user chooses to modify it.
* fix: checkstyle
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Initial implementation of integrating llama.cpp to run LLaMA models locally
* Move submodule
* Copy llama submodule to bundle
* Support for downloading models from IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override host
* Fix override host handler
* Disable port field when override host enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch you.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, removed code llama python model
* Code cleanup, enable llama port only on macOS
* Change downloaded gguf models path
* Move some of the labels to codegpt bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
* Free GPT4 for a month to try
* Better tooltip
* Replace toggle component with checkbox and other minor ui improvements
* Add UTM and userId params to You.com completion request
* Fix #145 - web search results not being displayed despite the flag
---------
Co-authored-by: siilats <keith@siilats.com>
* Ability to configure custom service
* Add example preset templates, rename module
* Custom service client impl
* Add YOU API integration
* Remove/ignore generated antlr classes
* Remove text completion models (deprecated)
* Remove unused code, fix settings state sync
* Display model name/icon in the tool window
* Update chat history UI
* Fix model/service sync
* Clear plugin state
* Fix minor bugs, add settings sync tests
* UI changes
* Separate model configuration
* Add support for overriding the completion path
* Update Find Bugs prompt
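The "overriding the completion path" commit above suggests URL assembly along the lines of the sketch below; the default path and the normalization rules are assumptions for illustration:

```java
// Sketch of building the endpoint URL for a custom OpenAI-compatible service:
// the base URL comes from settings, and the path defaults to the chat
// completions endpoint unless the user overrides it.
class CustomServiceEndpoint {
    static String buildUrl(String baseUrl, String overridePath) {
        String path = (overridePath == null || overridePath.isBlank())
            ? "/v1/chat/completions"
            : overridePath;
        // Normalize so exactly one slash joins base and path.
        String base = baseUrl.endsWith("/")
            ? baseUrl.substring(0, baseUrl.length() - 1)
            : baseUrl;
        if (!path.startsWith("/")) path = "/" + path;
        return base + path;
    }
}
```

An override like `v1/completions` then lets the same custom-service settings drive plain text completion instead of chat completion.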