koboldcpp/examples/llava
Latest commit: dcfa1eca4e by Concedo, 2025-01-08 23:15:21 +08:00

Merge commit '017cc5f446' into concedo_experimental

# Conflicts:
#	.github/ISSUE_TEMPLATE/010-bug-compilation.yml
#	.github/ISSUE_TEMPLATE/019-bug-misc.yml
#	CODEOWNERS
#	examples/batched-bench/batched-bench.cpp
#	examples/batched/batched.cpp
#	examples/convert-llama2c-to-ggml/convert-llama2c-to-ggml.cpp
#	examples/gritlm/gritlm.cpp
#	examples/llama-bench/llama-bench.cpp
#	examples/passkey/passkey.cpp
#	examples/quantize-stats/quantize-stats.cpp
#	examples/run/run.cpp
#	examples/simple-chat/simple-chat.cpp
#	examples/simple/simple.cpp
#	examples/tokenize/tokenize.cpp
#	ggml/CMakeLists.txt
#	ggml/src/ggml-metal/CMakeLists.txt
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	scripts/sync-ggml.last
#	src/llama.cpp
#	tests/test-autorelease.cpp
#	tests/test-model-load-cancel.cpp
#	tests/test-tokenizer-0.cpp
#	tests/test-tokenizer-1-bpe.cpp
#	tests/test-tokenizer-1-spm.cpp
| File | Last commit | Date |
|---|---|---|
| android | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| clip.cpp | temporarily make qwenv2l use clip on cpu for vulkan and macos | 2024-12-21 09:15:31 +08:00 |
| clip.h | temporarily make qwenv2l use clip on cpu for vulkan and macos | 2024-12-21 09:15:31 +08:00 |
| convert_image_encoder_to_gguf.py | ci : reduce severity of unused Pyright ignore comments (#9697) | 2024-09-30 14:13:16 -04:00 |
| llava-cli.cpp | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| llava.cpp | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| llava.h | llava : support MiniCPM-V-2.5 (#7599) | 2024-08-09 13:33:53 +03:00 |
| llava_surgery.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| llava_surgery_v2.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| minicpmv-cli.cpp | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| minicpmv-convert-image-encoder-to-gguf.py | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| minicpmv-surgery.py | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| quantclip.cpp | better quant clip | 2024-08-18 22:15:59 +08:00 |
| qwen2_vl_surgery.py | llava : Allow locally downloaded models for QwenVL (#10833) | 2024-12-15 21:43:25 +01:00 |
| qwen2vl-cli.cpp | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| README-minicpmv2.5.md | Fix minicpm example directory (#9111) | 2024-08-27 14:33:08 +02:00 |
| README-minicpmv2.6.md | llava : support MiniCPM-V-2.6 (#8967) | 2024-08-16 16:34:41 +03:00 |
| requirements.txt | py : fix requirements check '==' -> '~=' (#8982) | 2024-08-12 11:02:01 +03:00 |
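The requirements.txt commit above replaced exact `==` pins with pip's compatible-release operator `~=` (PEP 440). As a hypothetical illustration of the difference (the package name and versions here are made up, not taken from this repo's requirements.txt):

```
# '==' accepts exactly one version; '~=' also accepts later patch releases.
somepackage==10.2.0    # only 10.2.0 satisfies this
somepackage~=10.2.0    # equivalent to >=10.2.0, ==10.2.* (10.2.0, 10.2.1, ... but not 10.3.0)
```

Switching a requirements check from `==` to `~=` loosens it so that compatible bugfix releases already installed do not fail the check.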