Commit graph

20 commits

Carl-Robert Linnupuu
54fa78e7e6 fix: model value change for empty conversations 2024-12-17 11:06:28 +00:00
Carl-Robert Linnupuu
d5d03a53b1 refactor: improve chat completion call handling 2024-10-17 02:24:57 +03:00
Carl-Robert Linnupuu
8a7c84ae35 chore: remove You.com support 2024-06-24 17:48:27 +03:00
Rene Leonhardt
9bd7e6e83a feat: Visualize downloaded models (#543) 2024-05-13 10:48:55 +03:00
* feat: Visualize downloaded models
* Simplify GeneralSettings access
Carl-Robert
0852c27170 feat: add CodeGPT "native" API provider (#537) 2024-05-08 23:59:51 +03:00
* feat: support codegpt client
* feat: add basic request handler test
* refactor: minor cleanup
Phil
74fc2e6219 feat: add Google Gemini API support (#535) 2024-05-08 16:51:32 +03:00
Jack Boswell
e40630d796 feat: Implement Ollama as a high-level service (#510) 2024-05-08 01:11:13 +03:00
* Initial implementation of Ollama as a service
* Fix model selector in tool window
* Enable image attachment
* Rewrite OllamaSettingsForm in Kotlin
* Create OllamaInlineCompletionModel and use it for building the completion template
* Add support for blocking code completion on models that are not known to support it
* Allow disabling code completion settings
* Disable code completion settings when an unsupported model is entered
* Track FIM template in settings as a derived state
* Update llm-client
* Initial implementation of model combo box
* Add Ollama icon and display models as a list
* Make OllamaSettingsState immutable & convert OllamaSettings to Kotlin
* Add refresh models button
* Distinguish between empty/needs refresh/loading states
* Avoid storing any model if the combo box is empty
* Fix icon size
* Back to mutable settings (there were some bugs with immutable settings)
* Store available models in settings state
* Expose available models in model dropdown
* Add dark icon
* Cleanups for CompletionRequestProvider
* Fix checkstyle issues
* refactor: migrate to SimplePersistentStateComponent
* fix: add code completion stop tokens
* fix: display only one item in the model popup action group
* fix: add back multi-model selection

Co-authored-by: Carl-Robert Linnupuu <carlrobertoh@gmail.com>
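The Ollama entry above mentions building a fill-in-the-middle (FIM) completion template per model and tracking it in settings. A minimal sketch of the idea, assuming prefix/suffix text around the caret is joined with model-specific infill tokens (the class and enum names here are hypothetical, not the plugin's actual API; the token strings follow the published Code Llama and StarCoder infill formats):

```java
public class FimTemplateDemo {
    // Each supported model family has its own infill token layout, so the
    // template can be modeled as an enum mapped to a format string.
    enum InfillTemplate {
        CODE_LLAMA("<PRE> %s <SUF>%s <MID>"),
        STAR_CODER("<fim_prefix>%s<fim_suffix>%s<fim_middle>");

        private final String format;

        InfillTemplate(String format) {
            this.format = format;
        }

        // Joins the code before and after the caret into one infill prompt.
        String buildPrompt(String prefix, String suffix) {
            return String.format(format, prefix, suffix);
        }
    }

    public static void main(String[] args) {
        String prompt = InfillTemplate.CODE_LLAMA
            .buildPrompt("int add(int a, int b) {", "}");
        System.out.println(prompt);
    }
}
```

Blocking completion for unknown models then reduces to checking whether a template exists for the configured model name.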
Rene Leonhardt
7d89650062 chore: Improve code (#442) 2024-04-10 14:47:38 +03:00
* chore: Improve code
* Convert classes to records
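"Convert classes to records" refers to Java 16+ records, which replace hand-written data-carrier classes. A minimal illustration (the type and field names are invented for the example, not taken from the repository):

```java
public class RecordDemo {
    // A record declares its components once; the compiler generates the
    // canonical constructor, accessors, equals/hashCode and toString.
    record RemoteSettings(String host, int port) {}

    public static void main(String[] args) {
        RemoteSettings settings = new RemoteSettings("localhost", 8000);
        System.out.println(settings.host());
        System.out.println(settings.port());
        System.out.println(settings);
    }
}
```

Records are implicitly final with final fields, which suits immutable settings and request/response payload types.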
Carl-Robert
9706a357d2
feat: support claude completions (#398) 2024-03-06 12:48:29 +02:00
Carl-Robert
8507c779b1
feat: support custom OpenAI-compatible service (#383) 2024-02-23 17:41:44 +02:00
Carl-Robert
93145098f5 feat: settings and credentials refactoring (#360) 2024-02-08 01:02:08 +02:00
* refactor service credential managers
* refactor azure settings
* refactor openai settings
* refactor llama settings
* refactor you settings
* refactor included files settings
* refactor general settings
* refactor advanced settings
* fix advanced settings component init
* refactor project structure
* refactor service settings forms
* remove openai quota exceeded field validator
* fix credential modified conditions
* fix and rearrange minor stuff
* fix you auth logic, add credential cache
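The last bullet in #360 adds a credential cache. A minimal sketch of the idea, assuming an in-memory map placed in front of a slower secure store so each credential is read at most once (all names here are hypothetical, not the plugin's actual classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CredentialCacheDemo {
    static class CredentialCache {
        // Thread-safe map keyed by credential name.
        private final Map<String, String> cache = new ConcurrentHashMap<>();

        // Returns the cached value, invoking the loader only on a miss.
        String getOrLoad(String key, Function<String, String> loader) {
            return cache.computeIfAbsent(key, loader);
        }

        // Drop a cached entry, e.g. after the user edits the credential.
        void invalidate(String key) {
            cache.remove(key);
        }
    }

    public static void main(String[] args) {
        CredentialCache cache = new CredentialCache();
        // The loader stands in for a slow password-safe lookup.
        String first = cache.getOrLoad("OPENAI_API_KEY", k -> "sk-test");
        // On the second call the loader must not run; a throwing loader proves it.
        String second = cache.getOrLoad("OPENAI_API_KEY", k -> {
            throw new IllegalStateException("loader called twice");
        });
        System.out.println(first.equals(second));
    }
}
```

Invalidation on credential change is what the "fix credential modified conditions" bullet hints at: a stale cache would otherwise keep serving the old secret.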
Carl-Robert
f831a1facd
feat: add support for auto resolving compilation errors (#318) 2023-12-29 16:41:47 +02:00
Carl-Robert
c4115e257b
Add checkstyle rules (#274) 2023-11-16 17:15:11 +02:00
Carl-Robert Linnupuu
318dd4286a Fix minor issues related to total tokens calculation 2023-11-15 00:44:13 +02:00
Carl-Robert Linnupuu
14acc5b09f Remove Azure model selection and max completion token limit 2023-11-09 20:31:19 +02:00
Carl-Robert
cfa5ff7776
Use enum value to store selected service (#265) 2023-11-08 19:17:25 +02:00
Carl-Robert
45908e69df #178 - Add support for running local LLMs via LLaMA C/C++ port (#249) 2023-11-03 12:00:24 +02:00
* Initial implementation of integrating llama.cpp to run LLaMA models locally
* Move submodule
* Copy llama submodule to bundle
* Support for downloading models from the IDE
* Code cleanup
* Store port field
* Replace service selection radio group with dropdown
* Add quantization support + other fixes
* Add option to override host
* Fix override host handler
* Disable port field when override host is enabled
* Design updates
* Fix llama settings configuration, design changes, clean up code
* Improve You.com coupon design
* Add new Phind model and help tooltip
* Fetch You.com subscription
* Add CodeBooga model, fix downloadable model selection
* Chat history support
* Code refactoring, minor bug fixes
* UI updates, several bug fixes, removed Code Llama Python model
* Code cleanup, enable llama port only on macOS
* Change downloaded GGUF models path
* Move some of the labels to the CodeGPT bundle
* Minor fixes
* Remove ToRA model, add help texts
* Fix test
* Modify description
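The llama.cpp port above runs a local server that the IDE talks to over a configurable host and port; llama.cpp's bundled server accepts completion requests via `POST /completion`. A minimal sketch of building such a request with the JDK's `HttpClient` API (the method name and hand-assembled JSON body are illustrative, not the plugin's actual implementation):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class LlamaRequestDemo {
    // Builds a completion request against a locally running llama.cpp server.
    // The "Add option to override host" commit corresponds to making the
    // host/port here user-configurable.
    static HttpRequest buildCompletionRequest(String host, int port, String prompt) {
        String body = String.format("{\"prompt\": \"%s\", \"n_predict\": 128}", prompt);
        return HttpRequest.newBuilder()
            .uri(URI.create("http://" + host + ":" + port + "/completion"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildCompletionRequest("localhost", 8080, "Hello");
        System.out.println(request.uri());
        System.out.println(request.method());
    }
}
```

Disabling the port field when an override host is set (one of the bullets above) follows naturally: the override supplies the full authority, so the local port no longer applies.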
Carl-Robert
37af74ebdf You API integration (#203) 2023-09-14 14:52:18 +03:00
* Ability to configure a custom service
* Add example preset templates, rename module
* Custom service client implementation
* Add You API integration
* Remove/ignore generated ANTLR classes
* Remove text completion models (deprecated)
* Remove unused code, fix settings state sync
* Display model name/icon in the tool window
* Update chat history UI
* Fix model/service sync
* Clear plugin state
* Fix minor bugs, add settings sync tests
* UI changes
* Separate model configuration
* Add support for overriding the completion path
* Update Find Bugs prompt
Carl-Robert
ef5fd5919f
Encapsulate settings (#180) 2023-08-27 18:16:08 +03:00
Carl-Robert Linnupuu
26a3e07360 Reopen plugin's source code (1.10.8 → 2.0.5) 2023-08-25 16:36:22 +03:00
Renamed from src/main/java/ee/carlrobert/codegpt/state/conversations/ConversationsState.java