Commit graph

307 commits

Author SHA1 Message Date
Phil
6aee749ade feat: add OpenRouter service template (#581) 2024-06-06 15:29:15 +03:00
Phil
e96f6a418a feat: add field for environment variables for Llama server (#550)
Co-authored-by: Carl-Robert <carlrobertoh@gmail.com>
2024-06-06 15:29:10 +03:00
Phil
4c2f62d66d fix: remove trailing slashes from URL text fields (#579) 2024-06-06 15:29:03 +03:00
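The trailing-slash fix above can be sketched roughly as follows — a minimal, hypothetical helper (the class and method names are illustrative, not the plugin's actual code) that normalizes a base-URL text field by stripping any run of trailing `/` characters:

```java
// Hypothetical sketch of the idea behind 4c2f62d66d: strip trailing
// slashes from a user-entered base URL so path segments can be
// appended without producing "//" in requests.
class UrlFieldUtil {
    static String stripTrailingSlashes(String url) {
        // "/+$" matches one or more '/' characters at the end of the string.
        return url.replaceAll("/+$", "");
    }
}
```

For example, `stripTrailingSlashes("http://localhost:8080/")` yields `http://localhost:8080`, which joins cleanly with a path like `/v1/chat/completions`.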
Rene Leonhardt
235418e9ac feat: Support Phi-3 Medium 128K (#577) 2024-06-06 15:28:59 +03:00
Rene Leonhardt
1ef8afbc73 feat: Support Stable Code Instruct 3B (#552)
* feat: Support Stable Code Instruct 3B

* feat: Sort LLaMA models in settings
2024-06-06 15:28:54 +03:00
Carl-Robert Linnupuu
893a125adc fix: backward compatibility issues on plugin update (fixes #551) 2024-05-15 00:13:37 +03:00
Carl-Robert Linnupuu
8089cd9c7e chore: update toolwindow landing panel text content 2024-05-15 00:13:34 +03:00
Rene Leonhardt
44b61a443a feat: Show server name in start/stop notifications (#546)
* feat: Show server name in start/stop notifications

* feat: Show opposite action in notification

* feat: Pre-select biggest downloaded parameter size on model change

* chore: Update to latest llama.cpp fixes (2024-05-14)
2024-05-15 00:13:30 +03:00
Carl-Robert Linnupuu
14eab9d48c fix: compatibility issue 2024-05-14 00:22:22 +03:00
Carl-Robert Linnupuu
a18b74fe26 feat: add gpt-4o model (closes #547) 2024-05-14 00:10:38 +03:00
Carl-Robert Linnupuu
a0b14551c7 fix: landing page hyperlinks 2024-05-13 19:09:38 +03:00
Rene Leonhardt
a26b8f29ff feat: Start/stop LLaMA Server from statusbar (#544) 2024-05-13 19:09:32 +03:00
Carl-Robert Linnupuu
3c8778ae98 refactor: remove llama download marker from toolwindow popup menu 2024-05-13 17:58:35 +03:00
Carl-Robert Linnupuu
b65edd86a0 fix: build 2024-05-13 16:31:24 +03:00
Carl-Robert Linnupuu
1373e96eaa refactor: remove max_tokens configuration and other minor fixes 2024-05-13 16:04:40 +03:00
Rene Leonhardt
c4b65e7a53 feat: Visualize downloaded models (#543)
* feat: Visualize downloaded models

* Simplify GeneralSettings access
2024-05-13 16:02:27 +03:00
Carl-Robert Linnupuu
4dbe0f5532 fix: lookup completion request validation 2024-05-13 16:01:49 +03:00
Phil
d1bf0025c4 feat: add keyboard shortcuts for Editor actions (#542) 2024-05-13 16:01:44 +03:00
Rene Leonhardt
5440fc7469 fix: Handle problems graciously (#541) 2024-05-13 16:01:34 +03:00
Carl-Robert Linnupuu
8f793fdfa9 fix: store empty string as credential to avoid repeated secret fetching 2024-05-13 16:00:11 +03:00
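The empty-string credential fix above suggests a caching pattern worth spelling out: if a missing secret is cached as `null`/absent, every access retries the (slow) secret store; caching an empty string instead makes the "no credential" answer itself cacheable. A minimal sketch under that assumption — the class, method, and supplier here are illustrative, not the plugin's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of the pattern implied by 8f793fdfa9: cache ""
// for absent secrets so the secret store is queried at most once per key.
class CredentialCache {
    private final Map<String, String> cache = new HashMap<>();

    String get(String key, Supplier<String> secretStore) {
        // computeIfAbsent stores the mapped value even when the secret is
        // missing (as ""), preventing repeated store lookups for that key.
        return cache.computeIfAbsent(key, k -> {
            String value = secretStore.get();
            return value == null ? "" : value;
        });
    }
}
```

The design point is that "known to be empty" is distinct from "not yet fetched"; representing the former as `""` keeps it inside the cache.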
Carl-Robert Linnupuu
2e961bcb6a fix: lookup and commit message completions for codegpt provider 2024-05-13 16:00:05 +03:00
Rene Leonhardt
3f75057b8f chore: Update to CodeGemma 1.1 7b Instruct (#534) 2024-05-13 15:59:50 +03:00
Carl-Robert Linnupuu
52b5bcce96 fix: long-running tasks on EDT when initializing forms 2024-05-13 15:59:42 +03:00
Carl-Robert
4d0166b0ff feat: extract providers into their standalone configurables (#538)
* fix: extract services to their own configurables

* feat: switch to selected provider automatically upon apply

* fix: credentials loading at once

* fix: rename llama.cpp title
2024-05-13 15:59:14 +03:00
Carl-Robert
61613336bb feat: add CodeGPT "native" API provider (#537)
* feat: support codegpt client

* feat: add basic request handler test

* refactor: minor cleanup
2024-05-13 15:59:04 +03:00
Phil
9677e10d6b feat: add Google Gemini API support (#535) 2024-05-13 15:58:14 +03:00
Phil
143fea57fa fix: refresh Ollama models only when service is changed to Ollama (#536) 2024-05-13 15:57:48 +03:00
Phil
a173840ee5 Revert "fix: use /infill for llama.cpp code-completions (#513)" (#533)
This reverts commit 8de72b3301.
2024-05-13 15:57:40 +03:00
Rene Leonhardt
d743b60bfe feat: Support CodeQwen1.5-Chat model (#527)
* feat: Support CodeQwen1.5-Chat model

* Declare model directories explicitly
2024-05-13 15:57:13 +03:00
Jack Boswell
5fb6589dc6 feat: Implement Ollama as a high-level service (#510)
* Initial implementation of Ollama as a service

* Fix model selector in tool window

* Enable image attachment

* Rewrite OllamaSettingsForm in Kt

* Create OllamaInlineCompletionModel and use it for building completion template

* Add support for blocking code completion on models that we don't know support it

* Allow disabling code completion settings

* Disable code completion settings when an unsupported model is entered

* Track FIM template in settings as a derived state

* Update llm-client

* Initial implementation of model combo box

* Add Ollama icon and display models as list

* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin

* Add refresh models button

* Distinguish between empty/needs refresh/loading

* Avoid storing any model if the combo box is empty

* Fix icon size

* Back to mutable settings
There were some bugs with immutable settings

* Store available models in settings state

* Expose available models in model dropdown

* Add dark icon

* Cleanups for CompletionRequestProvider

* Fix checkstyle issues

* refactor: migrate to SimplePersistentStateComponent

* fix: add code completion stop tokens

* fix: display only one item in the model popup action group

* fix: add back multi model selection

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-05-13 15:57:00 +03:00
Phil
7b386686b0 fix: CustomService Test connection with correct settings (#531) 2024-05-13 15:55:28 +03:00
Carl-Robert Linnupuu
43ec887b17 fix: build 2024-05-13 15:55:22 +03:00
Phil
b70aad9980 fix: Storing HuggingFaceModel by modelName instead of quantization only (#529) 2024-05-13 15:55:17 +03:00
Phil
98ae7698e4 feat: add Mistral AI service template (#532) 2024-05-13 15:55:12 +03:00
Phil
908d8b9286 feat: add CodeGemma InfillPromptTemplate (#530) 2024-05-13 15:55:08 +03:00
Rene Leonhardt
af2391ac9a feat: Support CodeGemma 7b Instruct model (#524) (#525) 2024-05-13 15:55:02 +03:00
Jack Boswell
c37f02b6ad refactor: Expand and explicitly handle cases where a ServiceType is checked (#521)
This streamlines changes to ServiceType, where any additions or removals will be flagged at compile time to be handled, instead of silently falling back to a default value.
2024-05-13 15:54:54 +03:00
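The compile-time flagging described in #521 is the standard exhaustive-switch pattern: with no `default` branch, adding or removing an enum constant turns every unhandled switch into a compile error rather than a silent fallback. A sketch under assumptions — this trimmed `ServiceType` and the `displayNameOf` helper are illustrative, not the plugin's actual declarations:

```java
// Hypothetical sketch of the pattern from #521: an exhaustive switch
// expression over ServiceType (constants here are an assumed subset).
// Because there is no `default` arm, introducing a new constant makes
// this switch fail to compile until the new case is handled.
enum ServiceType { OPENAI, AZURE, LLAMA_CPP, OLLAMA }

class ServiceTypes {
    static String displayNameOf(ServiceType type) {
        return switch (type) {
            case OPENAI -> "OpenAI";
            case AZURE -> "Azure OpenAI";
            case LLAMA_CPP -> "llama.cpp";
            case OLLAMA -> "Ollama";
        };
    }
}
```

This trades the convenience of a default value for a guarantee: every site that inspects a `ServiceType` is forced to acknowledge new providers.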
Phil
da0f9a189e fix: add optional Git4Idea dependency to plugin.xml (#526) 2024-05-13 15:54:12 +03:00
Rene Leonhardt
f557d02748 feat: Support Phi-3 Mini model (#516) 2024-05-13 15:54:06 +03:00
Phil
5cd9531246 fix: focus on new editor action and refresh editor actions on apply (#518) 2024-05-13 15:54:00 +03:00
Phil
1842c98084 fix: use /infill for llama.cpp code-completions (#513) 2024-05-13 15:53:46 +03:00
Carl-Robert Linnupuu
28aac5a369 fix: commit message generation for custom openai services (closes #496) 2024-05-13 15:52:53 +03:00
Rene Leonhardt
10b8535347 fix: NPE when using unsupported model for code completions (#499) 2024-05-13 15:52:48 +03:00
Rene Leonhardt
f7ce60e1a4 feat: Add Llama 3 download sizes (#498) 2024-05-13 15:52:44 +03:00
Carl-Robert Linnupuu
b0ba7f998e fix: kotlin build interoperability 2024-04-22 12:20:36 +03:00
Carl-Robert Linnupuu
86cc6f08f0 fix: compatibility issues 2024-04-22 00:16:21 +03:00
Carl-Robert Linnupuu
ff528e91e0 fix: build errors 2024-04-22 00:07:39 +03:00
Carl-Robert Linnupuu
69dc2ea9fc fix: llama server success callback trigger 2024-04-22 00:00:41 +03:00
Carl-Robert Linnupuu
477d51ce7c fix: llama3 prompt 2024-04-22 00:00:35 +03:00
Rene Leonhardt
649e7c93c9 feat: Upgrade submodule for Llama 3 support (#483) 2024-04-21 23:58:10 +03:00