Commit graph

420 commits

Author | SHA1 | Message | Date
Phil
dcd0a3fc51
Revert "fix: use /infill for llama.cpp code-completions (#513)" (#533)
This reverts commit 8de72b3301.
2024-05-08 16:06:14 +03:00
Rene Leonhardt
ee16bfee10
feat: Support CodeQwen1.5-Chat model (#527)
* feat: Support CodeQwen1.5-Chat model

* Declare model directories explicitly
2024-05-08 16:05:51 +03:00
Jack Boswell
e40630d796
feat: Implement Ollama as a high-level service (#510)
* Initial implementation of Ollama as a service

* Fix model selector in tool window

* Enable image attachment

* Rewrite OllamaSettingsForm in Kt

* Create OllamaInlineCompletionModel and use it for building completion template

* Add support for blocking code completion on models that we don't know support it

* Allow disabling code completion settings

* Disable code completion settings when an unsupported model is entered

* Track FIM template in settings as a derived state

* Update llm-client

* Initial implementation of model combo box

* Add Ollama icon and display models as list

* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin

* Add refresh models button

* Distinguish between empty/needs refresh/loading

* Avoid storing any model if the combo box is empty

* Fix icon size

* Back to mutable settings
There were some bugs with immutable settings

* Store available models in settings state

* Expose available models in model dropdown

* Add dark icon

* Cleanups for CompletionRequestProvider

* Fix checkstyle issues

* refactor: migrate to SimplePersistentStateComponent

* fix: add code completion stop tokens

* fix: display only one item in the model popup action group

* fix: add back multi model selection

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-05-08 01:11:13 +03:00
Phil
7f7b35d3be
fix: CustomService Test connection with correct settings (#531) 2024-05-07 18:34:35 +03:00
Carl-Robert Linnupuu
13c59cc97b
fix: build 2024-05-07 18:20:06 +03:00
Phil
2dfb1b0800
fix: Storing HuggingFaceModel by modelName instead of quantization only (#529) 2024-05-07 18:14:19 +03:00
Phil
33aa0e1065
feat: add Mistral AI service template (#532) 2024-05-07 18:01:07 +03:00
Phil
2c0a28a912
feat: add CodeGemma InfillPromptTemplate (#530) 2024-05-07 17:51:04 +03:00
Rene Leonhardt
a2a8747aca
feat: Support CodeGemma 7b Instruct model (#524) (#525) 2024-05-07 10:43:14 +03:00
Jack Boswell
f44fab551b
refactor: Expand and explicitly handle cases where a ServiceType is checked (#521)
This streamlines changes to ServiceType, where any additions or removals will be flagged at compile time to be handled, instead of silently falling back to a default value.
2024-05-07 10:42:45 +03:00
Jack Boswell
5f5c9cbfa1
chore: Bump llm-client to 0.7.5 (#520)
* Bump llm-client to 0.7.3

* llm-client 0.7.5

Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>

---------

Co-authored-by: Rene Leonhardt <65483435+reneleonhardt@users.noreply.github.com>
2024-05-07 10:41:45 +03:00
Phil
e0f54a6b93
fix: add optional Git4Idea dependency to plugin.xml (#526) 2024-05-07 10:41:10 +03:00
Rene Leonhardt
6d6e0a3ccb
feat: Support Phi-3 Mini model (#516) 2024-04-27 23:50:03 +03:00
Phil
1415f387ff
fix: focus on new editor action and refresh editor actions on apply (#518) 2024-04-27 23:49:36 +03:00
Phil
8de72b3301
fix: use /infill for llama.cpp code-completions (#513) 2024-04-25 16:47:56 +03:00
Carl-Robert Linnupuu
7d05d17797
fix: commit message generation for custom openai services (closes #496) 2024-04-25 15:21:08 +03:00
Rene Leonhardt
a9e147ffc7
fix: NPE when using unsupported model for code completions (#499) 2024-04-24 10:24:44 +03:00
Rene Leonhardt
9823010526
feat: Add Llama 3 download sizes (#498) 2024-04-23 17:30:40 +03:00
Carl-Robert Linnupuu
0b2387c2f6
chore(deps): bump deps 2024-04-23 17:21:29 +03:00
Carl-Robert Linnupuu
48aa2f45a2
2.6.3 2024-04-22 12:30:29 +03:00
Carl-Robert Linnupuu
9c61b06e0f
fix: kotlin build interoperability 2024-04-22 12:04:58 +03:00
Carl-Robert Linnupuu
ed9397c3dd
fix: llama server success callback trigger 2024-04-21 23:25:53 +03:00
Carl-Robert Linnupuu
7899429d4f
fix: llama3 prompt 2024-04-21 23:01:33 +03:00
Carl-Robert Linnupuu
62f0fa43bc
docs: update plugin description 2024-04-21 18:00:15 +03:00
Carl-Robert Linnupuu
e8002a116c
docs: update changelog 2024-04-21 17:38:57 +03:00
Rene Leonhardt
a10b5f791a
feat: Upgrade submodule for Llama 3 support (#483) 2024-04-21 17:12:14 +03:00
Carl-Robert Linnupuu
39679d9ee9
fix: custom service settings sync 2024-04-21 01:39:26 +03:00
Rene Leonhardt
6e6a499105
feat: Support Llama 3 model (#479)
* feat: Support Llama 3 model (#478)

* Use new InfillPrompt

* Switch to lmstudio-community

* Use new Prompt

* llama.cpp removed the BOS token
https://github.com/ggerganov/llama.cpp/pull/6751/commits/a55d8a9348fc9e9215229bf03f96ecff4dcc7c91

* Add tests

* I would prefer a stream based solution

* Add 70B models

* Add tests for skipping blank system prompt

* Remove InfillPrompt for now
2024-04-21 01:12:13 +03:00
Carl-Robert Linnupuu
bcb33aeeeb
docs: update readme 2024-04-21 01:09:48 +03:00
Simon Svensson
14f3254913
feat: code completion for "Custom OpenAI Service" (#476)
* Add code completion setting states for custom service

* Add settings for code completion in Custom OpenAI service

* Move code completion section to the bottom

* Create test testFetchCodeCompletionCustomService

* Add Custom OpenAI to the "Enable/Disable Completion" actions

* New configuration UI separating /v1/chat/completions from /v1/completions

* Code completion for Custom Service

* Formatting fixes

* Move prefix and suffix to templates in body

* Message updates

* New tabbed UI for Chat and Code Completions

* convert to kotlin, improve ui and other minor changes

* fix test connection for chat completions

* add help tooltips

* allow backward compatibility

* support prefix and suffix placeholders

* fix initial state loading

---------

Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-20 23:23:08 +03:00
Phil
c8181a62e4
feat: add input field for llama server build parameters and improve error handling (#481) 2024-04-20 23:18:43 +03:00
Rene Leonhardt
67dc425a94
fix: Telemetry can't serialize traits anymore (#477)
* fix: Telemetry can't serialize traits anymore

* Add tests
2024-04-19 17:06:37 +03:00
Phil
9666590cb1
feat: add include file in context to editor context menu (#475)
* feat: add include file in context to editor context menu

* fix: custom title for IncludeFilesInContextAction in editor context menu
2024-04-18 18:49:04 +03:00
Rene Leonhardt
29b36c52f8
chore: Convert utils to Kotlin (#473)
* chore: Convert utils to Kotlin

* Remove nullable operators
2024-04-18 17:01:55 +03:00
Rene Leonhardt
b202d46984
fix: High CPU usage in new files check (#446) (#474)
* fix: High CPU usage in new files check (#446)

* Resolve absolute path
2024-04-18 16:36:49 +03:00
Carl-Robert Linnupuu
92d9d5ee20
fix: file watcher disposable by making it project-level service 2024-04-17 16:33:04 +03:00
ChuangLee
63f139dd74
feat: cancel completions early on newline (#461)
* Stream completion results and cancel early on newline

* Rename 'suggestion, needCancel' to 'message, cancel'

* Replace cancelCurrentCall() with eventSource.cancel() for simplicity

* remove isStreaming variable and onComplete() method

* fix: do not trigger completed callbacks during streaming

---------

Co-authored-by: lichuang <lichuanglai8@163.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-17 14:34:37 +03:00
Simon Svensson
b2d9442eba
fix: custom OpenAI service settings sync (#472) 2024-04-17 12:46:21 +03:00
Simon Svensson
7d075f6905
Persist credentials back into the PasswordSafe (#465) 2024-04-17 12:04:40 +03:00
René
2221d72430
feat: add support for placeholders in prompts (#458)
* fixes #432 adds support for Placeholders in Prompts

- activate gradle plugin Git4Idea
- adds PlaceholderUtil
- adds DATE_ISO_8601 PlaceholderReplacer
- adds BRANCH_NAME PlaceholderReplacer

* convert to kotlin, improve ui and add int. test

* fix: do not reuse projects from previous test runs

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-17 11:41:21 +03:00
Carl-Robert Linnupuu
f6a5113216
2.6.2 2024-04-15 16:03:37 +03:00
Carl-Robert Linnupuu
077059fd23
chore(deps): bump llm-client 2024-04-15 15:51:12 +03:00
Rene Leonhardt
5f16213bd1
fix: Use System Prompt from user configuration (#454) (#455) 2024-04-15 11:42:42 +03:00
Carl-Robert Linnupuu
0dfaa128b7
2.6.1 2024-04-12 18:04:51 +03:00
Carl-Robert Linnupuu
d4690e9796
fix: remove exclusion of okhttp dependency from gradle-intellij-plugin (required for publishPlugin task) 2024-04-12 18:01:21 +03:00
Carl-Robert Linnupuu
2911bc71ce
docs: update changelog 2024-04-12 17:17:29 +03:00
Carl-Robert Linnupuu
18a4e80951
chore(deps): bump llm-client 2024-04-12 16:21:54 +03:00
Carl-Robert Linnupuu
a9131430af
fix: temporarily disable tree-sitter logic (fixes #452) 2024-04-12 01:57:04 +03:00
Rene Leonhardt
6de7696877
fix: Use correct setting for updates and screenshot checkboxes (#446) (#451) 2024-04-11 17:54:28 +03:00
Rene Leonhardt
0cdd5096ba
chore: Convert Java tests to Kotlin (#447) 2024-04-11 12:03:31 +03:00