Commit graph

69 commits

Author SHA1 Message Date
Carl-Robert Linnupuu
7dc610d126 refactor: clean up unused messages 2024-10-29 00:19:23 +00:00
Carl-Robert Linnupuu
dfa551806b feat: additional validation for Auto Apply action 2024-10-28 23:59:51 +00:00
Carl-Robert Linnupuu
6fbea7d4b8 feat: auto apply (#743) 2024-10-28 16:33:50 +00:00
Carl-Robert Linnupuu
d0e74d43a2 refactor: improve project directory handling for git repository lookup 2024-10-16 15:50:02 +03:00
Ruslans Tarasovs
d336b9ec8b feat: support of parsing Custom OpenAI response as Code Completions (#727) 2024-10-07 21:42:44 +03:00
Carl-Robert Linnupuu
94d0bcd0a0 feat: support quick way of including git commit diffs in the prompt (closes #688) 2024-09-11 12:31:38 +03:00
Carl-Robert Linnupuu
c417ccadac feat: new 'Insert at Caret' toolwindow editor action 2024-09-05 01:28:42 +03:00
Carl-Robert Linnupuu
4898c8580c feat: add apply and diff actions for toolwindow code editor 2024-09-04 18:02:41 +03:00
Carl-Robert Linnupuu
2ce05a50af feat: add git context to code completions 2024-08-31 19:39:43 +03:00
Carl-Robert Linnupuu
d672d28474 feat: display web docs progress 2024-08-23 11:26:15 +03:00
Carl-Robert Linnupuu
c6e4d5fd7c feat: add default docs and other minor improvements 2024-08-14 00:12:16 +03:00
Carl-Robert
b4ef573be2
feat: add webpage documentation support (#650)
* feat: documentation support while chatting

* feat: support managing web documentation entries
2024-08-13 13:44:40 +03:00
Carl-Robert
05f146c405
feat: web search support (#641)
* feat: web search support

* fix: enable web search only for codegpt provider

* fix: checkstyle

* feat: improve list cell design
2024-07-30 15:53:45 +03:00
Carl-Robert Linnupuu
f85db97c40 feat: display popup close help text 2024-07-26 12:41:07 +03:00
Carl-Robert
d68b356b42
feat: improved popup suggestions and personas support (#638)
* feat: support personas

* fix: replace previous system prompts with personas

* feat: add persona toolbar label

* refactor: rename properties

* refactor: clean up

* fix: personas settings configurable state

* refactor: code cleanup

* feat: list item auto highlighting

* feat: replace personas toolbar label with action link

* refactor: code cleanup

* fix: manually added items not being deletable

* fix: personas settings configurable state

* refactor: clean up code

* fix: folder selection
2024-07-25 23:50:31 +03:00
Carl-Robert
1fc47fa889
feat: improve tool window's textbox (#621)
* feat: initial smart user input panel implementation

* refactor: clean up
2024-07-18 14:18:51 +03:00
Phil
620226ff1d
feat: add project context to code completions (#571)
* feat: add context to code completions

* feat: context finder for Python

* feat: improve and refactor context finder for Python

* feat: include method calls in JavaContextFinder

* test: add JavaContextFinder tests

* test: add PythonContextFinder tests

* fix: CompletionContextService thread

* fix: InfillPromptTemplate context files string

* refactor: simplify findRelevantElements for Java and Python

* feat: only add code snippets instead of files for code-completion context

* feat: add default multi-file prompt template

* fix: add Codestral multi-file FIM

* feat: add feature flag for context aware code completions

* feat: truncate project context elements for code completion
2024-07-03 17:38:03 +03:00
Carl-Robert
14a0d4085c
feat: fast code edits (#601)
* feat: initial implementation of direct code edits

* fix: popup model selection

* refactor: simplify code replacement logic

* feat: interactive code modifications

* refactor: remove junk
2024-06-30 00:39:52 +03:00
Carl-Robert Linnupuu
8a7c84ae35 chore: remove You.com support 2024-06-24 17:48:27 +03:00
PhilKes
2fcc7bd38d feat: re-select Ollama model after refresh if available, otherwise show error 2024-06-17 19:32:06 +03:00
PhilKes
4586838610 feat: optional apiKey field for Ollama service 2024-06-17 19:31:56 +03:00
Carl-Robert Linnupuu
3a4208c507 fix: replace codegpt website base url 2024-06-10 12:04:44 +03:00
Phil
08b592f7e8
feat: add field for environment variables for Llama server (#550)
Co-authored-by: Carl-Robert <carlrobertoh@gmail.com>
2024-05-23 12:55:51 +03:00
Rene Leonhardt
8e5ba8158d
feat: Show server name in start/stop notifications (#546)
* feat: Show server name in start/stop notifications

* feat: Show opposite action in notification

* feat: Pre-select biggest downloaded parameter size on model change

* chore: Update to latest llama.cpp fixes (2024-05-14)
2024-05-14 21:26:22 +03:00
Rene Leonhardt
7c668ae143 feat: Start/stop LLaMA Server from statusbar (#544) 2024-05-13 19:02:22 +03:00
Carl-Robert Linnupuu
014f26f802 refactor: remove max_tokens configuration and other minor fixes 2024-05-13 15:32:20 +03:00
Carl-Robert
7bee59a90e
feat: extract providers into their standalone configurables (#538)
* fix: extract services to their own configurables

* feat: switch to selected provider automatically upon apply

* fix: credentials loading at once

* fix: rename llama.cpp title
2024-05-09 11:16:09 +03:00
Carl-Robert
0852c27170
feat: add CodeGPT "native" API provider (#537)
* feat: support codegpt client

* feat: add basic request handler test

* refactor: minor cleanup
2024-05-08 23:59:51 +03:00
Phil
74fc2e6219 feat: add Google Gemini API support (#535) 2024-05-08 16:51:32 +03:00
Jack Boswell
e40630d796
feat: Implement Ollama as a high-level service (#510)
* Initial implementation of Ollama as a service

* Fix model selector in tool window

* Enable image attachment

* Rewrite OllamaSettingsForm in Kt

* Create OllamaInlineCompletionModel and use it for building completion template

* Add support for blocking code completion on models that we don't know support it

* Allow disabling code completion settings

* Disable code completion settings when an unsupported model is entered

* Track FIM template in settings as a derived state

* Update llm-client

* Initial implementation of model combo box

* Add Ollama icon and display models as list

* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin

* Add refresh models button

* Distinguish between empty/needs refresh/loading

* Avoid storing any model if the combo box is empty

* Fix icon size

* Back to mutable settings
There were some bugs with immutable settings

* Store available models in settings state

* Expose available models in model dropdown

* Add dark icon

* Cleanups for CompletionRequestProvider

* Fix checkstyle issues

* refactor: migrate to SimplePersistentStateComponent

* fix: add code completion stop tokens

* fix: display only one item in the model popup action group

* fix: add back multi model selection

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-05-08 01:11:13 +03:00
Simon Svensson
14f3254913
feat: code completion for "Custom OpenAI Service" (#476)
* Add code completion setting states for custom service

* Add settings for code completion in Custom OpenAI service

* Move code completion section to the bottom

* Create test testFetchCodeCompletionCustomService

* Add Custom OpenAI to the "Enable/Disable Completion" actions

* New configuration UI separating /v1/chat/completions from /v1/completions

* Code completion for Custom Service

* Formatting fixes

* Move prefix and suffix to templates in body

* Message updates

* New tabbed UI for Chat and Code Completions

* convert to kotlin, improve ui and other minor changes

* fix test connection for chat completions

* add help tooltips

* allow backward compatibility

* support prefix and suffix placeholders

* fix initial state loading

---------

Co-authored-by: Jack Boswell (boswelja) <boswelja@outlook.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-20 23:23:08 +03:00
Phil
c8181a62e4 feat: add input field for llama server build parameters and improve error handling (#481) 2024-04-20 23:18:43 +03:00
Phil
9666590cb1
feat: add include file in context to editor context menu (#475)
* feat: add include file in context to editor context menu

* fix: custom title for IncludeFilesInContextAction in editor context menu
2024-04-18 18:49:04 +03:00
René
2221d72430
feat: add support for placeholders in prompts (#458)
* fixes #432: adds support for Placeholders in Prompts

- activate gradle plugin Git4Idea
- adds PlaceholderUtil
- adds DATE_ISO_8601 PlaceholderReplacer
- adds BRANCH_NAME PlaceholderReplacer

* convert to kotlin, improve ui and add int. test

* fix: do not reuse projects from previous test runs

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-04-17 11:41:21 +03:00
Carl-Robert Linnupuu
f0172722c7 feat: add support for configuring code completions via settings 2024-04-03 02:02:15 +03:00
Carl-Robert
8cf5720db9
feat: OpenAI and Claude vision support (#430)
* feat: add OpenAI and Claude vision support

* refactor: replace awaitility with PlatformTestUtil.waitWithEventsDispatching

* feat: display error when image not found

* chore: bump llm-client

* feat: configurable file watcher and minor code cleanup

* fix: ensure image notifications are triggered only for image file types

* docs: update changelog

* fix: user textarea icon button behaviour

* refactor: minor cleanup
2024-04-02 02:50:41 +03:00
Carl-Robert Linnupuu
8c986fd7de feat: support git commit message generation with custom openai and anthropic service (#390) 2024-03-12 21:27:51 +02:00
Carl-Robert
9706a357d2 feat: support claude completions (#398) 2024-03-06 12:48:29 +02:00
Carl-Robert
8507c779b1 feat: support custom OpenAI-compatible service (#383) 2024-02-23 17:41:44 +02:00
Carl-Robert Linnupuu
d475ddb36f feat: support custom openai model configuration 2024-02-19 00:46:28 +02:00
Carl-Robert Linnupuu
4ed74a31c1 feat: second set of autocomplete improvements
- support typing as suggested functionality
- do not fetch completions on cursor change
- other minor fixes
2024-02-11 01:31:34 +02:00
Phil
cceba88c35
Allow using existing Llama Server instead of running locally (#345)
* Add setting to use existing Llama server

* minor UI improvements

* support infill template configuration

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-02-02 12:24:41 +02:00
Phil
7387cf4536
Inline Autocompletion Pt.2 (#333)
* Add first draft of inline code completion with mock text

* Adds InsertInlineTextAction for inserting autocomplete suggestion with tab

- Changed to disable suggestions when text is selected
- Adds and removes the insert action based on when it shows the inlay hint

* Request inline code completion

* Move inline completion prompt into txt file

* Add inline completion settings to ConfigurationState

* Fix code style

* Use EditorTrackerListener instead of EditorFactoryListener to enable inline completion

* Code completion requests synchronously without SSE

* Use LlamaClient.getInfill() for inline code completion

* support inlay block element rendering, clean up code

* Use only enclosed Method or Class contents for code completion if possible

* Refactor extracting PsiElement contents in code completion

* bump llm-client

* fix completion call from triggering on EDT, force method params to be nonnull by default

* refactor request building, decrease delay value

* Trigger code completion if cursor is not inside a word

* Improve inlay rendering

* Support cancellable infill requests

* add statusbar widget, disable completions by default

* Show error notification if code completion failed

* Truly disable/enable EditorInlayHandler when completion is turned off/on

* Add CodeCompletionEnabledListener Topic to control enabling/disabling code-completion

* Add progress indicator for code-completion with option to cancel

* Add CodeCompletionServiceTest + refactor inlay ElementRenderers

* several improvements

- replace timer implementation with call debouncing
- use OpenAI /v1/completions API for completions
- code refactoring

* trigger progress indicator only for llama completions

* fix tests

---------

Co-authored-by: James Higgins <james.isaac.higgins@gmail.com>
Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2024-01-31 01:05:31 +02:00
Phil
390d8cdd5e Add setting for custom Llama server executable (#344) 2024-01-30 11:22:22 +02:00
Carl-Robert
f831a1facd feat: add support for auto resolving compilation errors (#318) 2023-12-29 16:41:47 +02:00
Carl-Robert Linnupuu
e230640063 feat: extract llama request settings to its own state, improve UI/UX 2023-12-21 14:46:45 +02:00
Aliet Expósito García
9d83107dd5
Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty) (#311)
* Add support for some extended parameters of llama.cpp (top_k, top_p, min_p, and repeat_penalty)

Added 'top_k,' 'top_p,' 'min_p,' and 'repeat_penalty' fields to the llama.cpp request configuration. The default values for these fields match the defaults of llama.cpp. If left untouched, they do not affect the model's response to the request.

* Bump llm-client

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-18 11:53:23 +02:00
Carl-Robert Linnupuu
56c69f5eeb feat: allow commit message and method name generation with Azure service 2023-12-12 22:46:16 +02:00
Carl-Robert
f4be25bdac
Feature: Support chatting with multiple files (#306)
* Initial implementation

* Refactor UI related classes and organize imports

* Display selected files notification, include the files in the prompt

* feat: store referenced file paths in the message state

* feat: add selected files accordion

* feat: update UI

* feat: improve file selection

* feat: support prompt template configuration

* fix: token calculation for virtualfile checkbox tree

* refactor: clean up

* refactor: move labels/descriptions to bundle
2023-12-12 22:30:39 +02:00
René
c214b59f55
adds: configuration for the commit-message system prompt (#304)
* adds: configuration for the commit-message system prompt

This removes the default prompt file and moves it into the code, where it can be overwritten if the user chooses to modify the prompt.

* fix: checkstyle

---------

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
2023-12-09 14:48:10 +02:00