koboldcpp/examples/llava

Last commit: f144b1f345 "Merge branch 'upstream' into concedo_experimental" (Concedo, 2025-02-16 02:08:39 +08:00)

# Conflicts:
#	.devops/llama-cpp-cuda.srpm.spec
#	.devops/llama-cpp.srpm.spec
#	.devops/nix/package.nix
#	.devops/rocm.Dockerfile
#	.github/ISSUE_TEMPLATE/020-enhancement.yml
#	.github/ISSUE_TEMPLATE/030-research.yml
#	.github/ISSUE_TEMPLATE/040-refactor.yml
#	.github/ISSUE_TEMPLATE/config.yml
#	.github/pull_request_template.md
#	.github/workflows/bench.yml.disabled
#	.github/workflows/build.yml
#	.github/workflows/labeler.yml
#	CONTRIBUTING.md
#	Makefile
#	README.md
#	SECURITY.md
#	ci/README.md
#	common/CMakeLists.txt
#	docs/android.md
#	docs/backend/SYCL.md
#	docs/build.md
#	docs/cuda-fedora.md
#	docs/development/HOWTO-add-model.md
#	docs/docker.md
#	docs/install.md
#	docs/llguidance.md
#	examples/cvector-generator/README.md
#	examples/imatrix/README.md
#	examples/imatrix/imatrix.cpp
#	examples/llama.android/llama/src/main/cpp/CMakeLists.txt
#	examples/llama.swiftui/README.md
#	examples/llama.vim
#	examples/lookahead/README.md
#	examples/lookup/README.md
#	examples/main/README.md
#	examples/passkey/README.md
#	examples/pydantic_models_to_grammar_examples.py
#	examples/retrieval/README.md
#	examples/server/CMakeLists.txt
#	examples/server/README.md
#	examples/simple-cmake-pkg/README.md
#	examples/speculative/README.md
#	flake.nix
#	grammars/README.md
#	pyproject.toml
#	scripts/check-requirements.sh
| File | Last commit | Date |
| --- | --- | --- |
| `clip-quantize-cli.cpp` | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| `clip.cpp` | Merge branch 'upstream' into concedo_experimental | 2025-02-07 00:52:31 +08:00 |
| `clip.h` | Merge branch 'upstream' into concedo_experimental | 2025-02-07 00:52:31 +08:00 |
| `convert_image_encoder_to_gguf.py` | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00 |
| `glmedge-convert-image-encoder-to-gguf.py` | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| `glmedge-surgery.py` | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| `llava-cli.cpp` | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| `llava.cpp` | Merge branch 'upstream' into concedo_experimental | 2025-02-07 00:52:31 +08:00 |
| `llava.h` | llava : support MiniCPM-V-2.5 (#7599) | 2024-08-09 13:33:53 +03:00 |
| `llava_surgery.py` | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| `llava_surgery_v2.py` | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| `minicpmv-cli.cpp` | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| `minicpmv-convert-image-encoder-to-gguf.py` | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| `minicpmv-surgery.py` | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| `quantclip.cpp` | better quant clip | 2024-08-18 22:15:59 +08:00 |
| `qwen2_vl_surgery.py` | llava : Allow locally downloaded models for QwenVL (#10833) | 2024-12-15 21:43:25 +01:00 |
| `qwen2vl-cli.cpp` | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| `README-glmedge.md` | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| `README-minicpmo2.6.md` | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| `README-minicpmv2.5.md` | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| `README-minicpmv2.6.md` | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| `README-quantize.md` | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| `requirements.txt` | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |