Commit graph

164 commits

Author SHA1 Message Date
Carl-Robert Linnupuu
1aac1f1084 fix: code completion improvements 2024-02-07 00:47:13 +02:00
Carl-Robert Linnupuu
dfca391ed5 fix: revert code completion feature toggle dumbaware actions 2024-02-07 00:45:52 +02:00
Carl-Robert Linnupuu
df14b88617 feat: add the latest OpenAI chat models 2024-02-06 18:49:30 +02:00
Carl-Robert Linnupuu
fe4e02f7f6 Revert "Revert "feat: code completion improvements""
This reverts commit 7f586da0c1.
2024-02-06 02:18:53 +02:00
Carl-Robert Linnupuu
7f586da0c1 Revert "feat: code completion improvements"
This reverts commit abc8dc8d07.
2024-02-05 16:28:18 +02:00
Carl-Robert Linnupuu
abc8dc8d07 feat: code completion improvements
- truncate context when working with bigger files
- fix notification error messages
- other minor fixes
2024-02-05 15:59:49 +02:00
Phil
cceba88c35
Allow using existing Llama Server instead of running locally (#345)
* Add setting to use existing Llama server

* minor UI improvements

* support infill template configuration

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-02-02 12:24:41 +02:00
Phil
7387cf4536
Inline Autocompletion Pt.2 (#333)
* Add first draft of inline code completion with mock text

* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab

- Changed to disable suggestions when text is selected
- Adds and removes the insert action based on when it shows the inlay hint

* Request inline code completion

* Move inline completion prompt into txt file

* Add inline completion settings to ConfigurationState

* Fix code style

* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion

* Make code completion requests synchronously, without SSE

* Use LlamaClient.getInfill() for inline code completion

* support inlay block element rendering, clean up code

* Use only enclosed Method or Class contents for code completion if possible

* Refactor extracting PsiElement contents in code completion

* bump llm-client

* fix completion call from triggering on EDT, force method params to be nonnull by default

* refactor request building, decrease delay value

* Trigger code completion if cursor is not inside a word

* Improve inlay rendering

* Support cancellable infill requests

* add statusbar widget, disable completions by default

* Show error notification if code completion failed

* Truly disable/enable EditorInlayHandler when completion is turned off/on

* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion

* Add progress indicator for code-completion with option to cancel

* Add CodeCompletionServiceTest + refactor inlay ElementRenderers

* several improvements

- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring

* trigger progress indicator only for llama completions

* fix tests

---------

Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-01-31 01:05:31 +02:00
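The call-debouncing change listed above ("replace timer implementation with call debouncing") can be sketched as follows. This is a hypothetical `Debouncer` class for illustration, not the plugin's actual implementation: each new keystroke cancels the pending completion request and schedules a fresh one, so the model is only queried once the user pauses typing.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of call debouncing for inline completion triggers.
public class Debouncer {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();
    private ScheduledFuture<?> pending;

    // Cancel any previously scheduled call and schedule a new one,
    // so `task` runs only after `delayMs` ms of inactivity.
    public synchronized void call(Runnable task, long delayMs) {
        if (pending != null) {
            pending.cancel(false);
        }
        pending = scheduler.schedule(task, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

Compared with a raw timer, this keeps only one pending request alive at a time, which also pairs naturally with the cancellable infill requests mentioned earlier in the PR.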
Phil
390d8cdd5e
Add setting for custom LLama server executable (#344) 2024-01-30 11:22:22 +02:00
Carl-Robert
f831a1facd
feat: add support for auto resolving compilation errors (#318) 2023-12-29 16:41:47 +02:00
Carl-Robert Linnupuu
e230640063 feat: extract llama request settings to its own state, improve UI/UX 2023-12-21 14:46:45 +02:00
Aliet Expósito García
9d83107dd5
Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty) (#311)
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)

Added 'top_k', 'top_p', 'min_p', and 'repeat_penalty' fields to the llama.cpp request configuration. The default values for these fields match the defaults of llama.cpp, so if left untouched they do not affect the model's response to the request.

* Bump llm-client

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-18 11:53:23 +02:00
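The sampling parameters added in the PR above map onto fields of a llama.cpp server completion request. A minimal sketch of building such a request body (hypothetical helper class, not the plugin's or llm-client's actual code; the default values in the usage below are llama.cpp's own documented defaults, so sending them is effectively a no-op):

```java
// Hypothetical sketch: assembling a llama.cpp /completion request body
// with the extended sampling fields (top_k, top_p, min_p, repeat_penalty).
public class CompletionBodySketch {
    public static String completionBody(String prompt, int topK, double topP,
                                        double minP, double repeatPenalty) {
        // Field names follow the llama.cpp server API.
        return String.format(
            "{\"prompt\":\"%s\",\"top_k\":%d,\"top_p\":%s,"
                + "\"min_p\":%s,\"repeat_penalty\":%s}",
            prompt, topK, topP, minP, repeatPenalty);
    }

    public static void main(String[] args) {
        // llama.cpp defaults: top_k=40, top_p=0.95, min_p=0.05, repeat_penalty=1.1
        System.out.println(completionBody("fib(", 40, 0.95, 0.05, 1.1));
    }
}
```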
Carl-Robert Linnupuu
06895364ba feat: throw an error when too many files selected (temp) 2023-12-14 10:17:28 +02:00
Carl-Robert Linnupuu
6824fbeb3b feat: replace editor pane with action links 2023-12-13 16:52:35 +02:00
Carl-Robert Linnupuu
ee4d1e8da6 feat: improve multi-file selection dialog UI 2023-12-13 14:52:59 +02:00
Carl-Robert Linnupuu
56c69f5eeb feat: allow commit message and method name generation with Azure service 2023-12-12 22:46:16 +02:00
Carl-Robert
f4be25bdac
Feature: Support chatting with multiple files (#306)
* Initial implementation

* Refactor UI related classes and organize imports

* Display selected files notification, include the files in the prompt

* feat: store referenced file paths in the message state

* feat: add selected files accordion

* feat: update UI

* feat: improve file selection

* feat: support prompt template configuration

* fix: token calculation for virtualfile checkbox tree

* refactor: clean up

* refactor: move labels/descriptions to bundle
2023-12-12 22:30:39 +02:00
René
c214b59f55
adds: configuration for the commit-message system prompt (#304)
* adds: configuration for the commit-message system prompt

This removes the default prompt file and moves its contents into the code, where it can be overridden if the user chooses to modify the prompt.

* fix: checkstyle

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-09 14:48:10 +02:00
Carl-Robert Linnupuu
c36d4dd566 fix: redundant chat tab creation on provider change 2023-12-08 03:05:01 +02:00
Carl-Robert Linnupuu
cfe89fccb7 refactor: remove you.com coupon 2023-12-08 02:32:49 +02:00
Carl-Robert Linnupuu
ffb8299571 fix: azure host and path overriding 2023-12-08 01:03:41 +02:00
Carl-Robert Linnupuu
425b0cd58b refactor: improve llm-client code modularity 2023-12-07 21:48:12 +02:00
Carl-Robert Linnupuu
06ad159adf fix: JetBrains internal API usage warnings 2023-12-04 21:56:51 +02:00
Carl-Robert Linnupuu
46b88a4952 fix: settings state on server failure 2023-12-03 18:43:20 +02:00
Carl-Robert Linnupuu
0e61bee0f8 feat: improve llama server logging 2023-12-03 18:10:39 +02:00
Carl-Robert Linnupuu
1392775940 feat: display notification on plugin updates 2023-12-02 01:14:37 +02:00
Carl-Robert Linnupuu
dc2cc3c5a1 fix: UI concurrency issues (run completion events on EDT) 2023-12-01 00:32:52 +02:00
Viktor
92dbbb4a4d
Local LLM: Added empty check for Additional parameters field (#295)
Co-authored-by: Viktor <viktor.hoshyi@gg4l.com>
2023-11-28 20:15:22 +02:00
Carl-Robert
ae7f5d17db
262 - Support auto code formatting (#292) 2023-11-27 01:24:02 +02:00
Carl-Robert
2372eec3cf
285 - Include actual user selected files in the diff (#291) 2023-11-27 00:28:39 +02:00
Carl-Robert Linnupuu
8d4189c503 Minor clean up 2023-11-26 22:03:49 +02:00
Carl-Robert Linnupuu
01963e2faa Support additional command-line params for the server startup process 2023-11-26 13:21:02 +02:00
Carl-Robert
1df20ccb86
Update toolwindow UI (#290) 2023-11-26 10:52:47 +02:00
Carl-Robert Linnupuu
3797126de4 Add deepseek coder instruct models (1-33B) 2023-11-23 17:22:48 +02:00
Viktor
1acb950c33
Local LLM: Use OSProcessHandler.Silent instead of OSProcessHandler to prevent the server process from being killed (#287)
Co-authored-by: Viktor <viktor.hoshyi@gg4l.com>
2023-11-23 00:10:33 +02:00
Carl-Robert Linnupuu
ac39a863d1 Fix chat response JTextPane caret visibility 2023-11-22 02:10:48 +02:00
Carl-Robert
2317b54d56
272 - Fix editor actions when createNewChatOnEachAction cfg is turned on (#283) 2023-11-22 00:22:36 +02:00
Carl-Robert Linnupuu
ecf6ac02ed Disable tool window chat editor when initially displayed 2023-11-21 23:09:31 +02:00
Carl-Robert Linnupuu
53bdbcd4f5 Remove Quartz Scheduler, You.com model change topic, theme utils, and include other basic refactoring 2023-11-21 22:47:09 +02:00
Viktor
73870cca40
Added option to set the number of threads for local LLM models (#282)
* Added option to set the number of threads for local LLM models

* Refactoring

---------

Co-authored-by: Viktor <viktor.hoshyi@gg4l.com>
2023-11-21 00:36:19 +02:00
Carl-Robert Linnupuu
85eafadb47 Replace label 2023-11-20 15:47:50 +02:00
Carl-Robert
5c8a7c0e2b
Include default parameters for llama client (#281) 2023-11-20 00:07:31 +02:00
Carl-Robert
845c7b4cee
Support method name lookup generation (#280) 2023-11-19 22:56:12 +02:00
Carl-Robert
44e5aa79dd
Support git commit message generation (#276)
* Add git commit message generation feature using OpenAI service
2023-11-17 01:20:00 +02:00
Carl-Robert
c4115e257b
Add checkstyle rules (#274) 2023-11-16 17:15:11 +02:00
Carl-Robert Linnupuu
318dd4286a Fix minor issues related to total tokens calculation 2023-11-15 00:44:13 +02:00
Carl-Robert Linnupuu
346218b512 Clean up code 2023-11-14 16:20:59 +02:00
Carl-Robert Linnupuu
ec3120a5e6 Add interactive total token count label, codebase refactoring 2023-11-14 13:27:15 +02:00
Carl-Robert Linnupuu
d8e5e18998 Expand/Collapse logic for toolwindow editors 2023-11-10 15:06:22 +02:00
Carl-Robert Linnupuu
dea80fe8aa Fix model changed logic 2023-11-10 01:37:07 +02:00