Commit graph

19 commits

Author SHA1 Message Date
Phil
7387cf4536
Inline Autocompletion Pt.2 (#333)
* Add first draft of inline code completion with mock text

* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab

- Changed to disable suggestions when text is selected
- Adds and removes the insert action based on when it shows the inlay hint

* Request inline code completion

* Move inline completion prompt into txt file

* Add inline completion settings to ConfigurationState

* Fix code style

* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion

* Request code completions synchronously without SSE

* Use LlamaClient.getInfill() for inline code completion

* support inlay block element rendering, clean up code

* Use only enclosed Method or Class contents for code completion if possible

* Refactor extracting PsiElement contents in code completion

* bump llm-client

* prevent completion call from triggering on EDT, force method params to be nonnull by default

* refactor request building, decrease delay value

* Trigger code completion if cursor is not inside a word

* Improve inlay rendering

* Support cancellable infill requests

* add statusbar widget, disable completions by default

* Show error notification if code completion failed

* Truly disable/enable EditorInlayHandler when completion is turned off/on

* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion

* Add progress indicator for code-completion with option to cancel

* Add CodeCompletionServiceTest + refactor inlay ElementRenderers

* several improvements

- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring

* trigger progress indicator only for llama completions

* fix tests

---------

Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-01-31 01:05:31 +02:00
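The "replace timer implementation with call debouncing" item in the commit above refers to a standard technique: each keystroke cancels the previously scheduled completion request, so a request fires only after typing pauses. A minimal sketch of the idea, assuming a `ScheduledExecutorService`-based implementation (the `Debouncer` class and its method names are illustrative, not the plugin's actual API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Minimal debouncer: each call cancels the previously scheduled task,
// so the action runs only once typing has paused for `delayMs`.
public class Debouncer {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final long delayMs;
    private ScheduledFuture<?> pending;

    public Debouncer(long delayMs) {
        this.delayMs = delayMs;
    }

    public synchronized void debounce(Runnable action) {
        if (pending != null) {
            pending.cancel(false); // drop the not-yet-fired request
        }
        pending = scheduler.schedule(action, delayMs, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

Compared with a repeating timer, this never issues a request mid-burst: five rapid `debounce(...)` calls result in a single scheduled action.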
Carl-Robert Linnupuu
e230640063 feat: extract llama request settings to its own state, improve UI/UX 2023-12-21 14:46:45 +02:00
Aliet Expósito García
9d83107dd5
Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty) (#311)
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)

Added 'top_k', 'top_p', 'min_p', and 'repeat_penalty' fields to the llama.cpp request configuration. Their default values match llama.cpp's own defaults, so if left untouched they do not affect the model's response to the request.

* Bump llm-client

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-18 11:53:23 +02:00
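For context, the sampling parameters added in the commit above correspond to fields in the request body sent to llama.cpp's server. An illustrative fragment (the values shown are llama.cpp's commonly documented defaults at the time; treat them as an assumption, not a guarantee for any given build):

```json
{
  "prompt": "...",
  "top_k": 40,
  "top_p": 0.95,
  "min_p": 0.05,
  "repeat_penalty": 1.1
}
```

Because the plugin's defaults mirror the server's, sending this body is equivalent to omitting the four sampling fields entirely.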
Carl-Robert
f4be25bdac
Feature: Support chatting with multiple files (#306)
* Initial implementation

* Refactor UI related classes and organize imports

* Display selected files notification, include the files in the prompt

* feat: store referenced file paths in the message state

* feat: add selected files accordion

* feat: update UI

* feat: improve file selection

* feat: support prompt template configuration

* fix: token calculation for virtualfile checkbox tree

* refactor: clean up

* refactor: move labels/descriptions to bundle
2023-12-12 22:30:39 +02:00
René
c214b59f55
adds: configuration for the commit-message system prompt (#304)
* adds: configuration for the commit-message system prompt

this removes the default prompt file and moves its contents into the code as a built-in default, which is overridden if the user chooses to modify the prompt.

* fix: checkstyle

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-09 14:48:10 +02:00
Carl-Robert Linnupuu
1392775940 feat: display notification on plugin updates 2023-12-02 01:14:37 +02:00
Carl-Robert
ae7f5d17db
262 - Support auto code formatting (#292) 2023-11-27 01:24:02 +02:00
Carl-Robert Linnupuu
53bdbcd4f5 Remove Quartz Scheduler, You.com model change topic, theme utils, and include other basic refactoring 2023-11-21 22:47:09 +02:00
Carl-Robert Linnupuu
85eafadb47 Replace label 2023-11-20 15:47:50 +02:00
Carl-Robert
845c7b4cee
Support method name lookup generation (#280) 2023-11-19 22:56:12 +02:00
Carl-Robert
c4115e257b
Add checkstyle rules (#274) 2023-11-16 17:15:11 +02:00
Carl-Robert Linnupuu
ec3120a5e6 Add interactive total token count label, codebase refactoring 2023-11-14 13:27:15 +02:00
Carl-Robert Linnupuu
14acc5b09f Remove Azure model selection and max completion token limit 2023-11-09 20:31:19 +02:00
Carl-Robert
45908e69df
#178 - Add support for running local LLMs via LLaMA C/C++ port (#249)
* Initial implementation of integrating llama.cpp to run LLaMA models locally

* Move submodule

* Copy llama submodule to bundle

* Support for downloading models from IDE

* Code cleanup

* Store port field

* Replace service selection radio group with dropdown

* Add quantization support + other fixes

* Add option to override host

* Fix override host handler

* Disable port field when override host enabled

* Design updates

* Fix llama settings configuration, design changes, clean up code

* Improve You.com coupon design

* Add new Phind model and help tooltip

* Fetch you.com subscription

* Add CodeBooga model, fix downloadable model selection

* Chat history support

* Code refactoring, minor bug fixes

* UI updates, several bug fixes, removed code llama python model

* Code cleanup, enable llama port only on macOS

* Change downloaded gguf models path

* Move some of the labels to codegpt bundle

* Minor fixes

* Remove ToRA model, add help texts

* Fix test

* Modify description
2023-11-03 12:00:24 +02:00
Carl-Robert
37af74ebdf
You API integration (#203)
* Ability to configure custom service

* Add example preset templates, rename module

* Custom service client impl

* Add YOU API integration

* Remove/ignore generated antlr classes

* Remove text completion models(deprecated)

* Remove unused code, fix settings state sync

* Display model name/icon in the tool window

* Update chat history UI

* Fix model/service sync

* Clear plugin state

* Fix minor bugs, add settings sync tests

* UI changes

* Separate model configuration

* Add support for overriding the completion path

* Update Find Bugs prompt
2023-09-14 14:52:18 +03:00
Carl-Robert
ef5fd5919f
Encapsulate settings (#180) 2023-08-27 18:16:08 +03:00
Carl-Robert Linnupuu
26a3e07360 Reopen plugin's source code (1.10.8 → 2.0.5) 2023-08-25 16:36:22 +03:00
Carl-Robert Linnupuu
2cd4854cb6 Chat tabs improvements (#54) 2023-03-26 13:19:23 +01:00
Carl-Robert Linnupuu
57e1095dd1 Switch to openai-client, add conversation history empty label, remove unofficial reverse proxy (closes #43) 2023-03-21 00:27:27 +00:00
Renamed from src/main/java/ee/carlrobert/codegpt/ide/settings/configuration/ConfigurationComponent.java