* feat: support personas
* fix: replace previous system prompts with personas
* feat: add persona toolbar label
* refactor: rename properties
* refactor: clean up
* fix: personas settings configurable state
* refactor: code cleanup
* feat: list item auto highlighting
* feat: replace personas toolbar label with action link
* refactor: code cleanup
* fix: manually added items could not be deleted
* fix: personas settings configurable state
* refactor: clean up code
* fix: folder selection
* feat: Show server name in start/stop notifications
* feat: Show opposite action in notification
* feat: Pre-select the largest downloaded parameter size on model change
* chore: Update to latest llama.cpp fixes (2024-05-14)
* fix: extract services to their own configurables
* feat: switch to selected provider automatically upon apply
* fix: credentials loading at once
* fix: rename llama.cpp title
* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kt
* Create OllamaInlineCompletionModel and use it for building completion template
* Add support for blocking code completion on models not known to support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
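For context, FIM (fill-in-the-middle) templates are model-specific. Below is a hedged sketch of the idea using the published StarCoder and CodeLlama infill token formats; the enum and method names are illustrative, not the plugin's actual API:

```kotlin
// Hedged sketch of a FIM template abstraction; names are illustrative.
// The token formats are the published StarCoder and CodeLlama infill formats.
enum class InfillTemplate(private val format: String) {
    STARCODER("<fim_prefix>%s<fim_suffix>%s<fim_middle>"),
    CODE_LLAMA("<PRE> %s <SUF>%s <MID>");

    fun buildPrompt(prefix: String, suffix: String): String =
        format.format(prefix, suffix)
}

fun main() {
    // The prefix is the text before the caret, the suffix the text after it.
    println(InfillTemplate.STARCODER.buildPrompt("def add(a, b):\n    ", "\n"))
}
```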
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty/needs refresh/loading
* Avoid storing any model if the combo box is empty
* Fix icon size
* Back to mutable settings
There were some bugs with immutable settings
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
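SimplePersistentStateComponent is the IntelliJ Platform's lightweight persistence base class; a minimal sketch of the target pattern, with hypothetical property and storage names:

```kotlin
import com.intellij.openapi.components.BaseState
import com.intellij.openapi.components.SimplePersistentStateComponent
import com.intellij.openapi.components.State
import com.intellij.openapi.components.Storage

// Hypothetical settings class illustrating the migration pattern;
// property names and storage file are illustrative, not the plugin's actual ones.
class OllamaSettingsState : BaseState() {
    var host by string("http://localhost:11434")
    var model by string()
    var codeCompletionsEnabled by property(false)
}

@State(name = "OllamaSettings", storages = [Storage("ollamaSettings.xml")])
class OllamaSettings :
    SimplePersistentStateComponent<OllamaSettingsState>(OllamaSettingsState())
```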
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi model selection
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Add code completion setting states for custom service
* Add settings for code completion in Custom OpenAI service
* Move code completion section to the bottom
* Create test testFetchCodeCompletionCustomService
* Add Custom OpenAI to the "Enable/Disable Completion" actions
* New configuration UI separating /v1/chat/completions from /v1/completions
* Code completion for Custom Service
* Formatting fixes
* Move prefix and suffix to templates in body
* Message updates
* New tabbed UI for Chat and Code Completions
* convert to Kotlin, improve UI, and other minor changes
* fix test connection for chat completions
* add help tooltips
* allow backward compatibility
* support prefix and suffix placeholders
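A hedged illustration of what the prefix/suffix substitution amounts to; the $PREFIX/$SUFFIX names are assumptions, and real code would also need to JSON-escape the inserted text:

```kotlin
// Hypothetical placeholder names; real code must JSON-escape the
// substituted text before sending the request body.
fun fillCompletionTemplate(template: String, prefix: String, suffix: String): String =
    template.replace("\$PREFIX", prefix).replace("\$SUFFIX", suffix)

// Example /v1/completions body template using the hypothetical placeholders:
val bodyTemplate =
    """{"model": "gpt-3.5-turbo-instruct", "prompt": "${'$'}PREFIX", "suffix": "${'$'}SUFFIX"}"""
```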
* fix initial state loading
---------
Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* fixes #432: adds support for placeholders in prompts
- activates the Git4Idea Gradle plugin
- adds PlaceholderUtil
- adds a DATE_ISO_8601 PlaceholderReplacer
- adds a BRANCH_NAME PlaceholderReplacer (a hedged sketch follows this list)
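A minimal sketch of how such a placeholder mechanism might look; the interface, method names, and the `{{...}}` syntax are assumptions, only the placeholder keys come from the log:

```kotlin
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Hedged sketch; interface and method names are assumptions, as is the
// {{PLACEHOLDER}} syntax. A BRANCH_NAME replacer would query Git4Idea for
// the current repository's branch instead of using java.time.
interface PlaceholderReplacer {
    val placeholder: String
    fun replacement(): String
}

object DateIso8601Replacer : PlaceholderReplacer {
    override val placeholder = "DATE_ISO_8601"
    override fun replacement(): String =
        LocalDate.now().format(DateTimeFormatter.ISO_LOCAL_DATE)
}

fun applyPlaceholders(prompt: String, replacers: List<PlaceholderReplacer>): String =
    replacers.fold(prompt) { acc, replacer ->
        acc.replace("{{${replacer.placeholder}}}", replacer.replacement())
    }
```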
* convert to Kotlin, improve UI, and add integration test
* fix: do not reuse projects from previous test runs
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* feat: add OpenAI and Claude vision support
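For reference, a minimal sketch of the payload shape OpenAI's vision-capable chat API accepts (model name and image data are illustrative); Claude's API takes a similar content-parts array with base64 `source` blocks instead of `image_url`:

```kotlin
// Sketch of an OpenAI vision request body; model name and image data
// are illustrative placeholders.
val visionRequestBody = """
{
  "model": "gpt-4o",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "What is in this image?"},
      {"type": "image_url",
       "image_url": {"url": "data:image/png;base64,iVBORw0KGgo..."}}
    ]
  }]
}
""".trimIndent()
```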
* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching
* feat: display error when image not found
* chore: bump llm-client
* feat: configurable file watcher and minor code cleanup
* fix: ensure image notifications are triggered only for image file types
* docs: update changelog
* fix: user textarea icon button behaviour
* refactor: minor cleanup
* Add setting to use existing Llama server
* minor UI improvements
* support infill template configuration
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Add first draft of inline code completion with mock text
* Adds InsertInlineTextAction for inserting the autocomplete suggestion with Tab
- Disables suggestions when text is selected
- Adds and removes the insert action based on whether the inlay hint is shown
* Request inline code completion
* Move inline completion prompt into txt file
* Add inline completion settings to ConfigurationState
* Fix code style
* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion
* Request code completion synchronously, without SSE
* Use LlamaClient.getInfill() for inline code completion
* support inlay block element rendering, clean up code
* Use only the enclosing method or class contents for code completion when possible
* Refactor PsiElement content extraction in code completion
* bump llm-client
* prevent completion call from triggering on the EDT, force method params to be non-null by default
* refactor request building, decrease delay value
* Trigger code completion if the cursor is not inside a word
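The "not inside a word" guard can be read as a simple character test around the caret; a hedged sketch, not the plugin's exact logic:

```kotlin
// Hedged sketch: treat the caret as "inside a word" when identifier
// characters surround it on both sides; completion would only trigger
// when this returns false.
fun isCaretInsideWord(text: CharSequence, offset: Int): Boolean {
    val before = text.getOrNull(offset - 1) ?: return false
    val after = text.getOrNull(offset) ?: return false
    return Character.isJavaIdentifierPart(before) &&
        Character.isJavaIdentifierPart(after)
}
```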
* Improve inlay rendering
* Support cancellable infill requests
* add status bar widget, disable completions by default
* Show error notification if code completion failed
* Truly disable/enable EditorInlayHandler when completion is turned off/on
* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion
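CodeCompletionEnabledListener follows the standard IntelliJ message-bus pattern; a minimal sketch (the listener method name is an assumption):

```kotlin
import com.intellij.openapi.application.ApplicationManager
import com.intellij.util.messages.Topic

// Sketch of the message-bus pattern; the method name is an assumption.
interface CodeCompletionEnabledListener {
    fun onCodeCompletionsEnabledChange(enabled: Boolean)

    companion object {
        @JvmField
        val TOPIC: Topic<CodeCompletionEnabledListener> =
            Topic.create("code completion enabled", CodeCompletionEnabledListener::class.java)
    }
}

// Subscribers register on the message bus; publishing notifies all of them.
fun publishEnabledChange(enabled: Boolean) {
    ApplicationManager.getApplication().messageBus
        .syncPublisher(CodeCompletionEnabledListener.TOPIC)
        .onCodeCompletionsEnabledChange(enabled)
}
```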
* Add progress indicator for code-completion with option to cancel
* Add CodeCompletionServiceTest + refactor inlay ElementRenderers
* several improvements
- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring
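The call debouncing mentioned above replaces the timer: each trigger cancels the pending request and schedules a fresh one after a quiet period. A minimal sketch using the platform's Alarm, with an illustrative delay:

```kotlin
import com.intellij.openapi.Disposable
import com.intellij.util.Alarm

// Minimal debouncing sketch; the 250 ms delay is illustrative,
// not the plugin's actual value.
class CallDebouncer(parent: Disposable) {
    private val alarm = Alarm(Alarm.ThreadToUse.POOLED_THREAD, parent)

    fun debounce(delayMillis: Int = 250, task: () -> Unit) {
        alarm.cancelAllRequests()                        // drop the pending call
        alarm.addRequest(Runnable { task() }, delayMillis) // schedule a fresh one
    }
}
```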
* trigger progress indicator only for llama completions
* fix tests
---------
Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Add support for some extended llama.cpp parameters (top_k, top_p, min_p, and repeat_penalty)
Added top_k, top_p, min_p, and repeat_penalty fields to the llama.cpp request configuration. The default values for these fields match llama.cpp's own defaults, so if left untouched they do not affect the model's response.
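A sketch of the resulting llama.cpp /completion request body; the numbers shown mirror llama.cpp's documented defaults of the time and should be treated as approximate:

```kotlin
// Sketch of a llama.cpp /completion request body with the new sampling
// fields; values mirror llama.cpp's documented defaults (approximate).
val llamaRequestBody = """
{
  "prompt": "<PRE> fun add(a: Int, b: Int) = <SUF> <MID>",
  "top_k": 40,
  "top_p": 0.95,
  "min_p": 0.05,
  "repeat_penalty": 1.1
}
""".trimIndent()
```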
* Bump llm-client
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
* Initial implementation
* Refactor UI related classes and organize imports
* Display selected files notification, include the files in the prompt
* feat: store referenced file paths in the message state
* feat: add selected files accordion
* feat: update UI
* feat: improve file selection
* feat: support prompt template configuration
* fix: token calculation for the VirtualFile checkbox tree
* refactor: clean up
* refactor: move labels/descriptions to bundle
* adds: configuration for the commit-message system prompt
This removes the default prompt file and moves it into the code, where it can be overwritten if the user chooses to modify the prompt.
* fix: checkstyle
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>