Commit graph

197 commits

Author SHA1 Message Date
Carl-Robert Linnupuu
42105bf308 refactor: clean up old code 2024-03-14 14:34:29 +02:00
Carl-Robert Linnupuu
8151a69c7e fix: DeleteConversationAction update thread 2024-03-14 14:06:56 +02:00
Carl-Robert Linnupuu
a7610acfa1 fix: couple of intellij platform warnings 2024-03-13 16:47:00 +02:00
Carl-Robert Linnupuu
1edea138cf chore: bump sinceBuild and javaVersion 2024-03-13 11:53:15 +02:00
Carl-Robert Linnupuu
678768c069 fix: intellij platform warning (#400) 2024-03-12 23:13:16 +02:00
Carl-Robert Linnupuu
8c986fd7de feat: support git commit message generation with custom openai and anthropic service (#390) 2024-03-12 21:27:51 +02:00
Carl-Robert
91dd7bdb43
feat: apply post-processing for code completions (#404) 2024-03-11 23:13:10 +02:00
Dmitry Melanchenko
12cf5198f8
feat: implement support for You Pro modes (#399)
* Implement support for You Pro modes: Default, Agent, Custom with various 3rd party models and Research

* Update list of You modes/models depending on user having subscription

* add default value for chatMode
2024-03-11 22:25:33 +02:00
Carl-Robert Linnupuu
74e0db5eb6 fix: add default api version 2024-03-06 15:07:58 +02:00
Carl-Robert
9706a357d2
feat: support claude completions (#398) 2024-03-06 12:48:29 +02:00
squall
20c31de21d
fix: completion prompt template for DeepSeek Coder (#387)
* fix: completion prompt template for DeepSeek Coder

* Add stop token
2024-02-29 16:23:29 +02:00
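The DeepSeek Coder fix above concerns the fill-in-the-middle (FIM) prompt template and its stop token. A minimal sketch of such a template is below; the marker strings are ASCII stand-ins for the model's special tokens, and the exact strings the plugin uses are an assumption, not taken from this repository:

```python
# Sketch of a fill-in-the-middle (FIM) completion prompt.
# These marker strings are hypothetical ASCII stand-ins for DeepSeek
# Coder's special FIM tokens; the plugin's actual template may differ.
FIM_BEGIN = "<|fim_begin|>"
FIM_HOLE = "<|fim_hole|>"
FIM_END = "<|fim_end|>"

def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Wrap the code before and after the caret in FIM markers."""
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}"

# A stop token tells the server to cut generation at the end-of-fill
# marker instead of running on past the hole.
STOP_TOKENS = [FIM_END]
```

The "Add stop token" bullet in the commit corresponds to passing such a stop list with the completion request so generation ends cleanly at the hole.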
Carl-Robert Linnupuu
88946343c5 fix: custom service request body value conversions 2024-02-24 17:06:52 +02:00
Carl-Robert Linnupuu
eeda43b0e4 feat: support lookup completions for custom openai service 2024-02-24 14:38:51 +02:00
Carl-Robert Linnupuu
557f9b0ca0 fix: custom service request body serialization 2024-02-24 01:12:21 +02:00
Carl-Robert
8507c779b1
feat: support custom OpenAI-compatible service (#383) 2024-02-23 17:41:44 +02:00
jlatiav
c8bb33d9b2
fix: respect proxy settings for azure client (#382) 2024-02-22 12:40:12 +02:00
Oleksii Maryshchenko
9627bbda15
feat: use llama cpp for generation of git commit message. (#380)
* Enable remote llama cpp server for Windows.

* Mixtral instruct template was added.

* Use llama cpp for generation of git commit message.

* style fix
2024-02-22 12:23:22 +02:00
Oleksii Maryshchenko
6e1a116ed2
feat: enable remote server settings for Windows + Mixtral Instruct template (#378)
* Enable remote llama cpp server for Windows.

* Mixtral instruct template was added.
2024-02-21 00:03:06 +02:00
Carl-Robert Linnupuu
29c40a06aa fix: azure credential condition (fixes #375) 2024-02-19 18:17:06 +02:00
Carl-Robert Linnupuu
ad55078107 chore(deps): bump com.knuddels:jtokkit from 0.6.1 to 1.0.0 2024-02-19 14:52:37 +02:00
Carl-Robert Linnupuu
5a88a7d9f3 feat: hide code completion feature for Azure and You service 2024-02-19 14:33:25 +02:00
Carl-Robert Linnupuu
c05b42fddf fix: caret offset location upon document changes (fixes #367) 2024-02-19 14:11:08 +02:00
Carl-Robert Linnupuu
b059aeac6c fix: general settings isModified state 2024-02-19 01:11:29 +02:00
Carl-Robert Linnupuu
08cb81dabf refactor: openai settings form 2024-02-19 00:56:10 +02:00
Carl-Robert Linnupuu
d475ddb36f feat: support custom openai model configuration 2024-02-19 00:46:28 +02:00
Carl-Robert Linnupuu
4ed74a31c1 feat: second set of autocomplete improvements
- support typing as suggested functionality
- do not fetch completions on cursor change
- other minor fixes
2024-02-11 01:31:34 +02:00
PhilKes
056276d626 fix: Skip AbstractCredentialsManager.setCredential if credential is null 2024-02-09 01:37:08 +02:00
Carl-Robert Linnupuu
e831213509 fix: code completion cancelling 2024-02-08 01:58:15 +02:00
Carl-Robert Linnupuu
1a7e302ae2 fix: decrease prefix/suffix prompt size 2024-02-08 01:57:50 +02:00
Carl-Robert Linnupuu
5ea3609a92 fix: build caused by recent merge 2024-02-08 01:08:28 +02:00
Carl-Robert
93145098f5
feat: settings and credentials refactoring (#360)
* refactor service credential managers

* refactor azure settings

* refactor openai settings

* refactor llama settings

* refactor you settings

* refactor included files settings

* refactor general settings

* refactor advanced settings

* fix advanced settings component init

* refactor project structure

* refactor service settings forms

* remove openai quota exceeded field validator

* fix credential modified conditions

* fix and rearrange minor stuff

* fix you auth logic, add credential cache
2024-02-08 01:02:08 +02:00
squall
7c067d9edd
feat: remote server, add template support for DeepSeek Coder (#352)
* feat: remote server, add template support for DeepSeek Coder

* fix checkstyle error
2024-02-08 00:56:01 +02:00
Carl-Robert Linnupuu
097f0914bf refactor: extract configuration state into standalone class 2024-02-07 02:13:22 +02:00
Carl-Robert Linnupuu
d0132c6c34 refactor: clean up unused configuration 2024-02-07 00:49:16 +02:00
Carl-Robert Linnupuu
1aac1f1084 fix: code completion improvements 2024-02-07 00:47:13 +02:00
Carl-Robert Linnupuu
dfca391ed5 fix: revert code completion feature toggle dumbaware actions 2024-02-07 00:45:52 +02:00
Carl-Robert Linnupuu
df14b88617 feat: add the latest OpenAI chat models 2024-02-06 18:49:30 +02:00
Carl-Robert Linnupuu
fe4e02f7f6 Revert "Revert "feat: code completion improvements""
This reverts commit 7f586da0c1.
2024-02-06 02:18:53 +02:00
Carl-Robert Linnupuu
7f586da0c1 Revert "feat: code completion improvements"
This reverts commit abc8dc8d07.
2024-02-05 16:28:18 +02:00
Carl-Robert Linnupuu
abc8dc8d07 feat: code completion improvements
- truncate context when working with bigger files
- fix notification error messages
- other minor fixes
2024-02-05 15:59:49 +02:00
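The "truncate context when working with bigger files" item above describes keeping only the code nearest the caret when building a completion prompt. A minimal sketch of that idea, with illustrative size limits rather than the plugin's actual values:

```python
# Sketch of caret-centred context truncation for code completion.
# MAX_PREFIX and MAX_SUFFIX are illustrative limits, not the
# plugin's actual configuration.
MAX_PREFIX = 512
MAX_SUFFIX = 256

def truncate_context(text: str, caret: int) -> tuple:
    """Keep only the characters nearest the caret on each side."""
    prefix = text[:caret][-MAX_PREFIX:]   # last MAX_PREFIX chars before caret
    suffix = text[caret:][:MAX_SUFFIX]    # first MAX_SUFFIX chars after caret
    return prefix, suffix
```

Bounding both sides keeps prompts for large files within the model's context window; the later "decrease prefix/suffix prompt size" fix is a tightening of the same limits.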
Phil
cceba88c35
Allow using existing Llama Server instead of running locally (#345)
* Add setting to use existing Llama server

* minor UI improvements

* support infill template configuration

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-02-02 12:24:41 +02:00
Phil
7387cf4536
Inline Autocompletion Pt.2 (#333)
* Add first draft of inline code completion with mock text

* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab

- Changed to disable suggestions when text is selected
- Adds and removes the insert action based on when it shows the inlay hint

* Request inline code completion

* Move inline completion prompt into txt file

* Add inline completion settings to ConfigurationState

* Fix code style

* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion

* Code completion requests synchronously without SSE

* Use LlamaClient.getInfill() for inline code completion

* support inlay block element rendering, clean up code

* Use only enclosed Method or Class contents for code completion if possible

* Refactor extracting PsiElement contents in code completion

* bump llm-client

* fix completion call from triggering on EDT, force method params to be nonnull by default

* refactor request building, decrease delay value

* Trigger code completion if cursor is not inside a word

* Improve inlay rendering

* Support cancellable infill requests

* add statusbar widget, disable completions by default

* Show error notification if code completion failed

* Truly disable/enable EditorInlayHandler when completion is turned off/on

* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion

* Add progress indicator for code-completion with option to cancel

* Add CodeCompletionServiceTest + refactor inlay ElementRenderers

* several improvements

- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring

* trigger progress indicator only for llama completions

* fix tests

---------

Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-01-31 01:05:31 +02:00
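One bullet in the PR above replaces a timer implementation with call debouncing, so a completion request only fires once the user has stopped typing. A minimal sketch of that pattern, with an illustrative delay value:

```python
import threading

class Debouncer:
    """Delay a callback until the caller has been quiet for `delay` seconds.

    Illustrates the PR's idea of debouncing completion requests; the
    class name and delay value are assumptions, not the plugin's code.
    """

    def __init__(self, delay: float):
        self.delay = delay
        self._timer = None

    def call(self, fn, *args):
        if self._timer is not None:
            self._timer.cancel()  # drop the previously scheduled call
        self._timer = threading.Timer(self.delay, fn, args)
        self._timer.start()
```

Each keystroke reschedules the pending request, so only the final one (after the typing pause) actually reaches the completion backend.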
Phil
390d8cdd5e
Add setting for custom LLama server executable (#344) 2024-01-30 11:22:22 +02:00
Carl-Robert
f831a1facd
feat: add support for auto resolving compilation errors (#318) 2023-12-29 16:41:47 +02:00
Carl-Robert Linnupuu
e230640063 feat: extract llama request settings to its own state, improve UI/UX 2023-12-21 14:46:45 +02:00
Aliet Expósito García
9d83107dd5
Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty) (#311)
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)

Added 'top_k', 'top_p', 'min_p', and 'repeat_penalty' fields to the llama.cpp request configuration. The default values for these fields match the defaults of llama.cpp. If left untouched, they do not affect the model's response to the request.

* Bump llm-client

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-18 11:53:23 +02:00
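The commit body above describes adding sampling fields to the llama.cpp request whose defaults match llama.cpp's own. A sketch of such a request body; the default values here follow llama.cpp's documented defaults and are assumptions, not the plugin's exact configuration:

```python
import json

# Sketch of a llama.cpp completion request body with the sampling
# fields the commit adds. Defaults mirror llama.cpp's documented
# defaults; treat them as assumptions, not the plugin's exact values.
def build_completion_request(prompt: str,
                             top_k: int = 40,
                             top_p: float = 0.95,
                             min_p: float = 0.05,
                             repeat_penalty: float = 1.1) -> str:
    return json.dumps({
        "prompt": prompt,
        "top_k": top_k,
        "top_p": top_p,
        "min_p": min_p,
        "repeat_penalty": repeat_penalty,
    })
```

Because the defaults match the server's own, a request built without overrides behaves exactly as one that omits the fields, which is what the commit body claims.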
Carl-Robert Linnupuu
06895364ba feat: throw an error when too many files selected (temp) 2023-12-14 10:17:28 +02:00
Carl-Robert Linnupuu
6824fbeb3b feat: replace editor pane with action links 2023-12-13 16:52:35 +02:00
Carl-Robert Linnupuu
ee4d1e8da6 feat: improve multi-file selection dialog UI 2023-12-13 14:52:59 +02:00
Carl-Robert Linnupuu
56c69f5eeb feat: allow commit message and method name generation with Azure service 2023-12-12 22:46:16 +02:00