koboldcpp/examples/llava
Latest commit: ec43d2b147 by Concedo, 2025-03-06 18:54:58 +08:00
Merge branch 'upstream' into concedo_experimental

Conflicts:
    .github/workflows/build.yml
    README.md
    common/common.cpp
    examples/embedding/embedding.cpp
    examples/json_schema_to_grammar.py
    examples/llama.android/llama/src/main/cpp/llama-android.cpp
    examples/llama.swiftui/README.md
    examples/llama.swiftui/llama.swiftui.xcodeproj/project.pbxproj
    examples/lookahead/lookahead.cpp
    examples/parallel/parallel.cpp
    examples/passkey/passkey.cpp
    ggml/CMakeLists.txt
    ggml/src/CMakeLists.txt
    ggml/src/ggml-cpu/CMakeLists.txt
    requirements.txt
    requirements/requirements-all.txt
    scripts/fetch_server_test_models.py
    tests/test-chat.cpp
    tests/test-json-schema-to-grammar.cpp
| File | Last commit | Date |
| --- | --- | --- |
| clip-quantize-cli.cpp | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| clip.cpp | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| clip.h | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| convert_image_encoder_to_gguf.py | llava: add big-endian conversion for image encoder (#12218) | 2025-03-06 09:33:21 +01:00 |
| glmedge-convert-image-encoder-to-gguf.py | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| glmedge-surgery.py | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| llava-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava.cpp | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| llava.h | llava : support MiniCPM-V-2.5 (#7599) | 2024-08-09 13:33:53 +03:00 |
| llava_surgery.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| llava_surgery_v2.py | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| minicpmv-cli.cpp | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| minicpmv-convert-image-encoder-to-gguf.py | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| minicpmv-surgery.py | llava : support Minicpm-omni (#11289) | 2025-01-22 09:35:48 +02:00 |
| quantclip.cpp | better quant clip | 2024-08-18 22:15:59 +08:00 |
| qwen2_vl_surgery.py | llava : Allow locally downloaded models for QwenVL (#10833) | 2024-12-15 21:43:25 +01:00 |
| qwen2vl-cli.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| README-glmedge.md | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| README-granitevision.md | Rewrite history to fix bad vulkan shader commits without increasing repo size | 2025-03-05 00:02:20 +08:00 |
| README-minicpmo2.6.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| README-minicpmv2.5.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| README-minicpmv2.6.md | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| README-quantize.md | llava: add quantization for the visual projector LLAVA, Qwen2VL (#11644) | 2025-02-05 10:45:40 +03:00 |
| requirements.txt | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |