Commit graph

83 commits

Author SHA1 Message Date
Jack Boswell
e40630d796
feat: Implement Ollama as a high-level service (#510)
* Initial implementation of Ollama as a service

* Fix model selector in tool window

* Enable image attachment

* Rewrite OllamaSettingsForm in Kt

* Create OllamaInlineCompletionModel and use it for building completion template

* Add support for blocking code completion on models not known to support it

* Allow disabling code completion settings

* Disable code completion settings when an unsupported model is entered

* Track FIM template in settings as a derived state

* Update llm-client

* Initial implementation of model combo box

* Add Ollama icon and display models as list

* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin

* Add refresh models button

* Distinguish between empty/needs refresh/loading

* Avoid storing any model if the combo box is empty

* Fix icon size

* Back to mutable settings
There were some bugs with immutable settings

* Store available models in settings state

* Expose available models in model dropdown

* Add dark icon

* Cleanups for CompletionRequestProvider

* Fix checkstyle issues

* refactor: migrate to SimplePersistentStateComponent

* fix: add code completion stop tokens

* fix: display only one item in the model popup action group

* fix: add back multi model selection

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-05-08 01:11:13 +03:00
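The Ollama commit above tracks a fill-in-the-middle (FIM) template per model and uses it to build the completion prompt. A minimal sketch of the idea, assuming a hypothetical `InfillPromptTemplate` enum (the token strings shown are common FIM conventions, not necessarily the plugin's actual templates):

```java
// Sketch: a FIM template formats the code before/after the caret into a
// model-specific infill prompt. Enum name and token strings are assumptions.
public class InfillTemplateSketch {

    enum InfillPromptTemplate {
        CODE_LLAMA("<PRE> %s <SUF>%s <MID>"),
        STABLE_CODE("<fim_prefix>%s<fim_suffix>%s<fim_middle>");

        private final String format;

        InfillPromptTemplate(String format) {
            this.format = format;
        }

        // Interleave the caret context into the model's expected FIM layout
        String buildPrompt(String prefix, String suffix) {
            return String.format(format, prefix, suffix);
        }
    }

    public static void main(String[] args) {
        String prompt = InfillPromptTemplate.CODE_LLAMA
            .buildPrompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))");
        System.out.println(prompt);
    }
}
```

Storing the selected template as derived state (as the commit does) means switching models in settings automatically switches the prompt layout.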
Carl-Robert Linnupuu
13c59cc97b fix: build 2024-05-07 18:20:06 +03:00
Phil
2dfb1b0800
fix: Storing HuggingFaceModel by modelName instead of quantization only (#529) 2024-05-07 18:14:19 +03:00
Phil
2c0a28a912
feat: add CodeGemma InfillPromptTemplate (#530) 2024-05-07 17:51:04 +03:00
Phil
1415f387ff
fix: focus on new editor action and refresh editor actions on apply (#518) 2024-04-27 23:49:36 +03:00
Rene Leonhardt
a9e147ffc7
fix: NPE when using unsupported model for code completions (#499) 2024-04-24 10:24:44 +03:00
Rene Leonhardt
9823010526
feat: Add Llama 3 download sizes (#498) 2024-04-23 17:30:40 +03:00
Simon Svensson
14f3254913
feat: code completion for "Custom OpenAI Service" (#476)
* Add code completion setting states for custom service

* Add settings for code completion in Custom OpenAI service

* Move code completion section to the bottom

* Create test testFetchCodeCompletionCustomService

* Add Custom OpenAI to the "Enable/Disable Completion" actions

* New configuration UI separating /v1/chat/completions from /v1/completions

* Code completion for Custom Service

* Formatting fixes

* Move prefix and suffix to templates in body

* Message updates

* New tabbed UI for Chat and Code Completions

* convert to kotlin, improve ui and other minor changes

* fix test connection for chat completions

* add help tooltips

* allow backward compatibility

* support prefix and suffix placeholders

* fix initial state loading

---------

Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-20 23:23:08 +03:00
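The "Move prefix and suffix to templates in body" and "support prefix and suffix placeholders" steps above let the user's request-body template carry the caret context. A rough illustration of the substitution, assuming hypothetical `{prefix}`/`{suffix}` placeholder names and a generic helper, not the plugin's actual classes:

```java
import java.util.Map;

// Sketch: substitute caret context into a user-configured request body
// template. Placeholder names "{prefix}"/"{suffix}" are assumptions.
public class BodyTemplateSketch {

    static String fillPlaceholders(String template, Map<String, String> values) {
        String result = template;
        for (Map.Entry<String, String> entry : values.entrySet()) {
            // Simple literal replacement; a real impl would also escape JSON
            result = result.replace("{" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String body = "{\"prompt\": \"{prefix}\", \"suffix\": \"{suffix}\"}";
        System.out.println(fillPlaceholders(body, Map.of(
            "prefix", "int sum = a",
            "suffix", ";")));
    }
}
```

Keeping the placeholders in the body template is what allows the same settings UI to target either `/v1/chat/completions` or `/v1/completions` style endpoints.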
Phil
c8181a62e4
feat: add input field for llama server build parameters and improve error handling (#481) 2024-04-20 23:18:43 +03:00
Rene Leonhardt
b202d46984
fix: High CPU usage in new files check (#446) (#474)
* fix: High CPU usage in new files check (#446)

* Resolve absolute path
2024-04-18 16:36:49 +03:00
Simon Svensson
b2d9442eba
fix: custom OpenAI service settings sync (#472) 2024-04-17 12:46:21 +03:00
René
2221d72430
feat: add support for placeholders in prompts (#458)
* fixes #432: adds support for placeholders in prompts

- activate gradle plugin Git4Idea
- adds PlaceholderUtil
- adds DATE_ISO_8601 PlaceholderReplacer
- adds BRANCH_NAME PlaceholderReplacer

* convert to kotlin, improve ui and add int. test

* fix: do not reuse projects from previous test runs

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-17 11:41:21 +03:00
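The placeholder commit above adds a `PlaceholderUtil` with per-placeholder replacers. A minimal sketch of that shape; only the names `DATE_ISO_8601` and `BRANCH_NAME` come from the commit message, while the interface, syntax, and hard-coded branch value are illustrative assumptions:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.function.Supplier;

// Sketch: each placeholder maps to a supplier producing its replacement.
public class PlaceholderSketch {

    enum Placeholder {
        DATE_ISO_8601(() -> LocalDate.now().format(DateTimeFormatter.ISO_LOCAL_DATE)),
        BRANCH_NAME(() -> "main"); // a real impl would query Git4Idea for the branch

        final Supplier<String> replacer;

        Placeholder(Supplier<String> replacer) {
            this.replacer = replacer;
        }
    }

    static String replacePlaceholders(String prompt) {
        for (Placeholder p : Placeholder.values()) {
            prompt = prompt.replace("$" + p.name(), p.replacer.get());
        }
        return prompt;
    }

    public static void main(String[] args) {
        System.out.println(replacePlaceholders("Commit on $BRANCH_NAME at $DATE_ISO_8601"));
    }
}
```

Activating the Git4Idea plugin dependency (as the commit notes) is what makes branch lookups possible from the prompt-building code.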
Rene Leonhardt
5f16213bd1
fix: Use System Prompt from user configuration (#454) (#455) 2024-04-15 11:42:42 +03:00
Rene Leonhardt
6de7696877
fix: Use correct setting for updates and screenshot checkboxes (#446) (#451) 2024-04-11 17:54:28 +03:00
Rene Leonhardt
7d89650062
chore: Improve code (#442)
* chore: Improve code

* Convert classes to records
2024-04-10 14:47:38 +03:00
Carl-Robert Linnupuu
4688a1c8d0 refactor: remove 'Standard' prefix from toolwindow component class names, and other minor cleanup 2024-04-07 16:45:04 +03:00
Carl-Robert Linnupuu
f0172722c7 feat: add support for configuring code completions via settings 2024-04-03 02:02:15 +03:00
Carl-Robert
8cf5720db9
feat: OpenAI and Claude vision support (#430)
* feat: add OpenAI and Claude vision support

* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching

* feat: display error when image not found

* chore: bump llm-client

* feat: configurable file watcher and minor code cleanup

* fix: ensure image notifications are triggered only for image file types

* docs: update changelog

* fix: user textarea icon button behaviour

* refactor: minor cleanup
2024-04-02 02:50:41 +03:00
Carl-Robert Linnupuu
6255bf9eb6 fix: preload credentials to avoid long running tasks on EDT 2024-03-28 00:09:49 +02:00
Carl-Robert Linnupuu
c0c02d9afb refactor: remove custom Azure service configuration 2024-03-14 14:58:58 +02:00
Carl-Robert Linnupuu
a7610acfa1 fix: couple of intellij platform warnings 2024-03-13 16:47:00 +02:00
Carl-Robert Linnupuu
1edea138cf chore: bump sinceBuild and javaVersion 2024-03-13 11:53:15 +02:00
Dmitry Melanchenko
12cf5198f8
feat: implement support for You Pro modes (#399)
* Implement support for You Pro modes: Default, Agent, Custom with various 3rd party models and Research

* Update list of You modes/models depending on user having subscription

* add default value for chatMode
2024-03-11 22:25:33 +02:00
Carl-Robert Linnupuu
74e0db5eb6 fix: add default api version 2024-03-06 15:07:58 +02:00
Carl-Robert
9706a357d2
feat: support claude completions (#398) 2024-03-06 12:48:29 +02:00
Carl-Robert Linnupuu
88946343c5 fix: custom service request body value conversions 2024-02-24 17:06:52 +02:00
Carl-Robert Linnupuu
557f9b0ca0 fix: custom service request body serialization 2024-02-24 01:12:21 +02:00
Carl-Robert
8507c779b1
feat: support custom OpenAI-compatible service (#383) 2024-02-23 17:41:44 +02:00
Oleksii Maryshchenko
6e1a116ed2
feat: enable remote server settings for Windows + Mixtral Instruct template (#378)
* Enable remote llama cpp server for Windows.

* Mixtral instruct template was added.
2024-02-21 00:03:06 +02:00
Carl-Robert Linnupuu
b059aeac6c fix: general settings isModified state 2024-02-19 01:11:29 +02:00
Carl-Robert Linnupuu
08cb81dabf refactor: openai settings form 2024-02-19 00:56:10 +02:00
Carl-Robert Linnupuu
d475ddb36f feat: support custom openai model configuration 2024-02-19 00:46:28 +02:00
PhilKes
056276d626 fix: Skip AbstractCredentialsManager.setCredential if credential is null 2024-02-09 01:37:08 +02:00
Carl-Robert
93145098f5
feat: settings and credentials refactoring (#360)
* refactor service credential managers

* refactor azure settings

* refactor openai settings

* refactor llama settings

* refactor you settings

* refactor included files settings

* refactor general settings

* refactor advanced settings

* fix advanced settings component init

* refactor project structure

* refactor service settings forms

* remove openai quota exceeded field validator

* fix credential modified conditions

* fix and rearrange minor stuff

* fix you auth logic, add credential cache
2024-02-08 01:02:08 +02:00
Carl-Robert Linnupuu
097f0914bf refactor: extract configuration state into standalone class 2024-02-07 02:13:22 +02:00
Carl-Robert Linnupuu
d0132c6c34 refactor: clean up unused configuration 2024-02-07 00:49:16 +02:00
Phil
cceba88c35
Allow using existing Llama Server instead of running locally (#345)
* Add setting to use existing Llama server

* minor UI improvements

* support infill template configuration

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-02-02 12:24:41 +02:00
Phil
7387cf4536
Inline Autocompletion Pt.2 (#333)
* Add first draft of inline code completion with mock text

* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab

- Changed to disable suggestions when text is selected
- Adds and removes the insert action based on when it shows the inlay hint

* Request inline code completion

* Move inline completion prompt into txt file

* Add inline completion settings to ConfigurationState

* Fix code style

* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion

* Code completion requests synchronously without SSE

* Use LlamaClient.getInfill() for inline code completion

* support inlay block element rendering, clean up code

* Use only enclosed Method or Class contents for code completion if possible

* Refactor extracting PsiElement contents in code completion

* bump llm-client

* fix completion call from triggering on EDT, force method params to be nonnull by default

* refactor request building, decrease delay value

* Trigger code completion if cursor is not inside a word

* Improve inlay rendering

* Support cancellable infill requests

* add statusbar widget, disable completions by default

* Show error notification if code completion failed

* Truly disable/enable EditorInlayHandler when completion is turned off/on

* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion

* Add progress indicator for code-completion with option to cancel

* Add CodeCompletionServiceTest + refactor inlay ElementRenderers

* several improvements

- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring

* trigger progress indicator only for llama completions

* fix tests

---------

Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-01-31 01:05:31 +02:00
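The "replace timer implementation with call debouncing" step in the inline-autocompletion PR above can be sketched as a generic debouncer (a hypothetical standalone class, not the plugin's actual implementation): only the last completion request inside the delay window fires, so fast typing does not flood the model with requests.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: cancel the previously queued call each time a new one arrives,
// so only the final call in a burst actually runs.
public class CallDebouncer {

    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    public synchronized void debounce(Runnable task, long delayMs) {
        if (pending != null) {
            pending.cancel(false); // drop the previously queued call
        }
        pending = scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        CallDebouncer debouncer = new CallDebouncer();
        for (int i = 0; i < 5; i++) {
            int n = i;
            debouncer.debounce(() -> System.out.println("completion request " + n), 50);
        }
        Thread.sleep(200); // only the last request ("completion request 4") fires
        debouncer.shutdown();
    }
}
```

Compared with a bare timer, cancelling the pending future also gives a natural hook for the cancellable infill requests mentioned earlier in the same PR.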
Phil
390d8cdd5e
Add setting for custom LLama server executable (#344) 2024-01-30 11:22:22 +02:00
Carl-Robert
f831a1facd
feat: add support for auto resolving compilation errors (#318) 2023-12-29 16:41:47 +02:00
Carl-Robert Linnupuu
e230640063 feat: extract llama request settings to its own state, improve UI/UX 2023-12-21 14:46:45 +02:00
Aliet Expósito García
9d83107dd5
Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty) (#311)
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)

Added 'top_k', 'top_p', 'min_p', and 'repeat_penalty' fields to the llama.cpp request configuration. The default values for these fields match the defaults of llama.cpp. If left untouched, they do not affect the model's response to the request.

* Bump llm-client

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-18 11:53:23 +02:00
Carl-Robert
f4be25bdac
Feature: Support chatting with multiple files (#306)
* Initial implementation

* Refactor UI related classes and organize imports

* Display selected files notification, include the files in the prompt

* feat: store referenced file paths in the message state

* feat: add selected files accordion

* feat: update UI

* feat: improve file selection

* feat: support prompt template configuration

* fix: token calculation for virtualfile checkbox tree

* refactor: clean up

* refactor: move labels/descriptions to bundle
2023-12-12 22:30:39 +02:00
René
c214b59f55
adds: configuration for the commit-message system prompt (#304)
* adds: configuration for the commit-message system prompt

this removes the default file and moves the prompt into code, where it can be overwritten if the user chooses to modify it.

* fix: checkstyle

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-09 14:48:10 +02:00
Carl-Robert Linnupuu
cfe89fccb7 refactor: remove you.com coupon 2023-12-08 02:32:49 +02:00
Carl-Robert Linnupuu
425b0cd58b refactor: improve llm-client code modularity 2023-12-07 21:48:12 +02:00
Carl-Robert Linnupuu
46b88a4952 fix: settings state on server failure 2023-12-03 18:43:20 +02:00
Carl-Robert Linnupuu
0e61bee0f8 feat: improve llama server logging 2023-12-03 18:10:39 +02:00
Carl-Robert Linnupuu
1392775940 feat: display notification on plugin updates 2023-12-02 01:14:37 +02:00
Viktor
92dbbb4a4d
Local LLM: Added empty check for Additional parameters field (#295)
Co-authored-by: Viktor <viktor.hoshyi@gg4l.com>
2023-11-28 20:15:22 +02:00