#178 - Add support for running local LLMs via LLaMA C/C++ port (#249)

* Initial implementation of llama.cpp integration for running LLaMA models locally

* Move submodule

* Copy llama submodule to bundle

* Support for downloading models from IDE

* Code cleanup

* Store port field

* Replace service selection radio group with dropdown

* Add quantization support + other fixes

* Add option to override host

* Fix override host handler

* Disable port field when override host is enabled (see the settings sketch after this list)

* Design updates

* Fix llama settings configuration, design changes, clean up code

* Improve You.com coupon design

* Add new Phind model and help tooltip

* Fetch You.com subscription

* Add CodeBooga model, fix downloadable model selection

* Chat history support

* Code refactoring, minor bug fixes

* UI updates, several bug fixes, remove Code Llama Python model

* Code cleanup, enable llama port only on macOS

* Change downloaded gguf models path

* Move some of the labels to the CodeGPT bundle

* Minor fixes

* Remove ToRA model, add help texts

* Fix test

* Modify description
Carl-Robert 2023-11-03 12:00:24 +02:00 committed by GitHub
parent ca2eb9b6fa
commit 45908e69df
71 changed files with 2748 additions and 533 deletions


@@ -180,10 +180,10 @@ public class ConfigurationComponent {
     try {
       var value = Double.parseDouble(valueText);
       if (value > 1.0 || value < 0.0) {
-        return new ValidationInfo("Value must be between 0 and 1.", component);
+        return new ValidationInfo(CodeGPTBundle.get("validation.error.mustBeBetweenZeroAndOne"), component);
       }
     } catch (NumberFormatException e) {
-      return new ValidationInfo("Value must be number.", component);
+      return new ValidationInfo(CodeGPTBundle.get("validation.error.mustBeNumber"), component);
     }
     return null;