Carl-Robert Linnupuu
d817e950a5
chore: update toolwindow landing panel text content
2024-05-15 00:11:15 +03:00
Rene Leonhardt
d953b0320c
feat: Show server name in start/stop notifications (#546)
* feat: Show server name in start/stop notifications
* feat: Show opposite action in notification
* feat: Pre-select biggest downloaded parameter size on model change
* chore: Update to latest llama.cpp fixes (2024-05-14)
2024-05-15 00:11:10 +03:00
Carl-Robert Linnupuu
d38af4226d
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-05-14 00:11:17 +03:00
Carl-Robert Linnupuu
de3db77755
feat: add gpt-4o model (closes #547)
2024-05-14 00:03:45 +03:00
Carl-Robert Linnupuu
fe7a33ac2a
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-05-13 19:08:56 +03:00
Carl-Robert Linnupuu
864f442db1
fix: landing page hyperlinks
2024-05-13 19:04:46 +03:00
Rene Leonhardt
7c668ae143
feat: Start/stop LLaMA Server from statusbar (#544)
2024-05-13 19:02:22 +03:00
Carl-Robert Linnupuu
1a55798997
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-05-13 17:59:07 +03:00
Carl-Robert Linnupuu
91c7302008
refactor: remove llama download marker from toolwindow popup menu
2024-05-13 17:56:15 +03:00
Carl-Robert Linnupuu
d9d7c65688
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-05-13 15:42:09 +03:00
Carl-Robert Linnupuu
48e641fc59
Merge branch 'master' of github.com:carlrobertoh/CodeGPT
2024-05-13 15:36:12 +03:00
Carl-Robert Linnupuu
014f26f802
refactor: remove max_tokens configuration and other minor fixes
2024-05-13 15:32:20 +03:00
Rene Leonhardt
9bd7e6e83a
feat: Visualize downloaded models (#543)
* feat: Visualize downloaded models
* Simplify GeneralSettings access
2024-05-13 10:48:55 +03:00
Carl-Robert Linnupuu
0b21652c04
fix: lookup completion request validation
2024-05-11 02:18:24 +03:00
Phil
fcd0808111
feat: add keyboard shortcuts for Editor actions (#542)
2024-05-10 17:10:29 +03:00
Rene Leonhardt
725bf84ac8
fix: Handle problems gracefully (#541)
2024-05-10 15:20:48 +03:00
Carl-Robert Linnupuu
47d1d5dea8
fix: store empty string as credential to avoid repeated secret fetching
2024-05-09 16:18:35 +03:00
Carl-Robert Linnupuu
310210957b
fix: lookup and commit message completions for codegpt provider
2024-05-09 15:41:04 +03:00
Carl-Robert Linnupuu
8883200817
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-05-09 14:00:56 +03:00
Rene Leonhardt
59acb59843
chore: Update to CodeGemma 1.1 7b Instruct (#534)
2024-05-09 13:08:55 +03:00
Carl-Robert Linnupuu
fedbe11fd2
fix: long-running tasks on EDT when initializing forms
2024-05-09 13:05:38 +03:00
Carl-Robert
7bee59a90e
feat: extract providers into their standalone configurables (#538)
* fix: extract services to their own configurables
* feat: switch to selected provider automatically upon apply
* fix: credentials loading at once
* fix: rename llama.cpp title
2024-05-09 11:16:09 +03:00
Carl-Robert
0852c27170
feat: add CodeGPT "native" API provider (#537)
* feat: support codegpt client
* feat: add basic request handler test
* refactor: minor cleanup
2024-05-08 23:59:51 +03:00
Phil
74fc2e6219
feat: add Google Gemini API support (#535)
2024-05-08 16:51:32 +03:00
Phil
5d2bc13f8c
fix: refresh Ollama models only when service is changed to Ollama (#536)
2024-05-08 16:07:00 +03:00
Phil
dcd0a3fc51
Revert "fix: use /infill for llama.cpp code-completions (#513)" (#533)
This reverts commit 8de72b3301.
2024-05-08 16:06:14 +03:00
Rene Leonhardt
ee16bfee10
feat: Support CodeQwen1.5-Chat model (#527)
* feat: Support CodeQwen1.5-Chat model
* Declare model directories explicitly
2024-05-08 16:05:51 +03:00
Jack Boswell
e40630d796
feat: Implement Ollama as a high-level service (#510)
* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kt
* Create OllamaInlineCompletionModel and use it for building completion template
* Add support for blocking code completion on models that aren't known to support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty/needs refresh/loading
* Avoid storing any model if the combo box is empty
* Fix icon size
* Back to mutable settings
There were some bugs with immutable settings
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi model selection
---------
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-05-08 01:11:13 +03:00
Phil
7f7b35d3be
fix: CustomService Test connection with correct settings (#531)
2024-05-07 18:34:35 +03:00
Carl-Robert Linnupuu
13c59cc97b
fix: build
2024-05-07 18:20:06 +03:00
Phil
2dfb1b0800
fix: Storing HuggingFaceModel by modelName instead of quantization only (#529)
2024-05-07 18:14:19 +03:00
Phil
33aa0e1065
feat: add Mistral AI service template (#532)
2024-05-07 18:01:07 +03:00
Phil
2c0a28a912
feat: add CodeGemma InfillPromptTemplate (#530)
2024-05-07 17:51:04 +03:00
Rene Leonhardt
a2a8747aca
feat: Support CodeGemma 7b Instruct model (#524) (#525)
2024-05-07 10:43:14 +03:00
Jack Boswell
f44fab551b
refactor: Expand and explicitly handle cases where a ServiceType is checked (#521)
This streamlines changes to ServiceType: any addition or removal is flagged at compile time and must be handled explicitly, instead of silently falling back to a default value.
2024-05-07 10:42:45 +03:00
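The commit above relies on exhaustive handling of an enum so the compiler, rather than a runtime default, catches unhandled cases. A minimal sketch of that technique in Java (the enum constants and display names here are hypothetical, not the plugin's actual code):

```java
// Illustrative sketch: a switch expression with no 'default' arm is
// checked for exhaustiveness, so adding a new ServiceType constant
// becomes a compile-time error until every switch handles it.
enum ServiceType { OPENAI, AZURE, LLAMA_CPP, OLLAMA }

class ServiceTypeDemo {
    static String displayName(ServiceType type) {
        // No 'default' branch on purpose: silent fallbacks are what
        // the refactor removed.
        return switch (type) {
            case OPENAI -> "OpenAI";
            case AZURE -> "Azure OpenAI";
            case LLAMA_CPP -> "llama.cpp";
            case OLLAMA -> "Ollama";
        };
    }

    public static void main(String[] args) {
        System.out.println(displayName(ServiceType.OLLAMA));
    }
}
```

The same idea applies to Kotlin's `when` over an enum or sealed class, which is the likely form in this codebase's Kotlin files.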
Phil
e0f54a6b93
fix: add optional Git4Idea dependency to plugin.xml (#526)
2024-05-07 10:41:10 +03:00
Rene Leonhardt
6d6e0a3ccb
feat: Support Phi-3 Mini model (#516)
2024-04-27 23:50:03 +03:00
Phil
1415f387ff
fix: focus on new editor action and refresh editor actions on apply (#518)
2024-04-27 23:49:36 +03:00
Phil
8de72b3301
fix: use /infill for llama.cpp code-completions (#513)
2024-04-25 16:47:56 +03:00
Carl-Robert Linnupuu
7d05d17797
fix: commit message generation for custom openai services (closes #496)
2024-04-25 15:21:08 +03:00
Rene Leonhardt
a9e147ffc7
fix: NPE when using unsupported model for code completions (#499)
2024-04-24 10:24:44 +03:00
Rene Leonhardt
9823010526
feat: Add Llama 3 download sizes (#498)
2024-04-23 17:30:40 +03:00
Carl-Robert Linnupuu
ddf2eeef2e
fix: kotlin build interoperability
2024-04-22 12:13:55 +03:00
Carl-Robert Linnupuu
9c61b06e0f
fix: kotlin build interoperability
2024-04-22 12:04:58 +03:00
Carl-Robert Linnupuu
e7ef58ad3d
Merge remote-tracking branch 'origin/master' into platform/2024.1
2024-04-22 11:49:37 +03:00
Carl-Robert Linnupuu
ed9397c3dd
fix: llama server success callback trigger
2024-04-21 23:25:53 +03:00
Carl-Robert Linnupuu
7899429d4f
fix: llama3 prompt
2024-04-21 23:01:33 +03:00
Rene Leonhardt
a10b5f791a
feat: Upgrade submodule for Llama 3 support (#483)
2024-04-21 17:12:14 +03:00
Carl-Robert Linnupuu
39679d9ee9
fix: custom service settings sync
2024-04-21 01:39:26 +03:00
Rene Leonhardt
6e6a499105
feat: Support Llama 3 model (#479)
* feat: Support Llama 3 model (#478)
* Use new InfillPrompt
* Switch to lmstudio-community
* Use new Prompt
* llama.cpp removed the BOS token
https://github.com/ggerganov/llama.cpp/pull/6751/commits/a55d8a9348fc9e9215229bf03f96ecff4dcc7c91
* Add tests
* I would prefer a stream-based solution
* Add 70B models
* Add tests for skipping blank system prompt
* Remove InfillPrompt for now
2024-04-21 01:12:13 +03:00